2026-03-09T20:13:52.279 INFO:root:teuthology version: 1.2.4.dev6+g1c580df7a
2026-03-09T20:13:52.284 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T20:13:52.313 INFO:teuthology.run:Config:
archive_path: /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641
branch: squid
description: orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_monitoring_stack_basic}
email: null
first_in_suite: false
flavor: default
job_id: '641'
ktype: distro
last_in_suite: false
machine_type: vps
name: kyr-2026-03-09_11:23:05-orch-squid-none-default-vps
no_nested_subset: false
os_type: ubuntu
os_version: '22.04'
overrides:
  admin_socket:
    branch: squid
  ansible.cephlab:
    branch: main
    skip_tags: nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
    vars:
      timezone: UTC
  ceph:
    conf:
      global:
        mon election default strategy: 1
      mgr:
        debug mgr: 20
        debug ms: 1
        mgr/cephadm/use_agent: false
      mon:
        debug mon: 20
        debug ms: 1
        debug paxos: 20
      osd:
        debug ms: 1
        debug osd: 20
        osd mclock iops capacity threshold hdd: 49000
    flavor: default
    log-ignorelist:
    - \(MDS_ALL_DOWN\)
    - \(MDS_UP_LESS_THAN_MAX\)
    - MON_DOWN
    - mons down
    - mon down
    - out of quorum
    - CEPHADM_STRAY_DAEMON
    - CEPHADM_FAILED_DAEMON
    log-only-match:
    - CEPHADM_
    sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
  ceph-deploy:
    conf:
      client:
        log file: /var/log/ceph/ceph-$name.$pid.log
      mon: {}
  install:
    ceph:
      flavor: default
      sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
    extra_system_packages:
      deb:
      - python3-xmltodict
      - python3-jmespath
      rpm:
      - bzip2
      - perl-Test-Harness
      - python3-xmltodict
      - python3-jmespath
  workunit:
    branch: tt-squid
    sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
owner: kyr
priority: 1000
repo: https://github.com/ceph/ceph.git
roles:
- - host.a
  - mon.a
  - mgr.a
  - osd.0
- - host.b
  - mon.b
  - mgr.b
  - osd.1
- - host.c
  - mon.c
  - osd.2
seed: 3443
sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
sleep_before_teardown: 0
subset: 1/64
suite: orch
suite_branch: tt-squid
suite_path: /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa
suite_relpath: qa
suite_repo: https://github.com/kshtsk/ceph.git
suite_sha1: 569c3e99c9b32a51b4eaf08731c728f4513ed589
targets:
  vm03.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKRPb5k7uAubZOlnYx2KAz3WwBUCBgcSVrnHkJoH7CKehjVzvf703LVYsPcgRjoKe6UXz+4mJA8ZOShDcEqGXo4=
  vm04.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLAB8AmIE9CmVqiMnWcuwmlWqsbxtmvaXwnGGJnFdmBnBA5HNnGm784AvBf8s2JSdUI//Z6Mo+Fyt7ZAnakOlU8=
  vm08.local: ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK6bUif1tgvaCD/PP5A5bhtWdcn0hV56rvythdFThC5axvjSkcUsik59qsQcOAKYNlyBhdoQs5oG+u1Kwty3IVY=
tasks:
- install: null
- cephadm: null
- cephadm.shell:
    host.a:
    - |
      set -e
      set -x
      ceph orch apply node-exporter
      ceph orch apply grafana
      ceph orch apply alertmanager
      ceph orch apply prometheus
      sleep 240
      ceph orch ls
      ceph orch ps
      ceph orch host ls
      MON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r 'last | .daemon_name')
      GRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e '.[]' | jq -r '.hostname')
      PROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e '.[]' | jq -r '.hostname')
      ALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e '.[]' | jq -r '.hostname')
      GRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" '.[] | select(.hostname==$GRAFANA_HOST) | .addr')
      PROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" '.[] | select(.hostname==$PROM_HOST) | .addr')
      ALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" '.[] | select(.hostname==$ALERTM_HOST) | .addr')
      # check each host node-exporter metrics endpoint is responsive
      ALL_HOST_IPS=$(ceph orch host ls -f json | jq -r '.[] | .addr')
      for ip in $ALL_HOST_IPS; do
        curl -s http://${ip}:9100/metric
      done
      # check grafana endpoints are responsive and database health is okay
      curl -k -s https://${GRAFANA_IP}:3000/api/health
      curl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e '.database == "ok"'
      # stop mon daemon in order to trigger an alert
      ceph orch daemon stop $MON_DAEMON
      sleep 120
      # check prometheus endpoints are responsive and mon down alert is firing
      curl -s http://${PROM_IP}:9095/api/v1/status/config
      curl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e '.status == "success"'
      curl -s http://${PROM_IP}:9095/api/v1/alerts
      curl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e '.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"'
      # check alertmanager endpoints are responsive and mon down alert is active
      curl -s http://${ALERTM_IP}:9093/api/v2/status
      curl -s http://${ALERTM_IP}:9093/api/v2/alerts
      curl -s http://${ALERTM_IP}:9093/api/v2/alerts | jq -e '.[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"'
teuthology:
  fragments_dropped: []
  meta: {}
  postmerge: []
teuthology_branch: clyso-debian-13
teuthology_repo: https://github.com/clyso/teuthology
teuthology_sha1: 1c580df7a9c7c2aadc272da296344fd99f27c444
timestamp: 2026-03-09_11:23:05
tube: vps
user: kyr
verbose: false
worker_log: /home/teuthos/.teuthology/dispatcher/dispatcher.vps.611473
2026-03-09T20:13:52.313 INFO:teuthology.run:suite_path is set to /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa; will attempt to use it
2026-03-09T20:13:52.313 INFO:teuthology.run:Found tasks at /home/teuthos/src/github.com_kshtsk_ceph_569c3e99c9b32a51b4eaf08731c728f4513ed589/qa/tasks
2026-03-09T20:13:52.313 INFO:teuthology.run_tasks:Running task internal.check_packages...
2026-03-09T20:13:52.314 INFO:teuthology.task.internal:Checking packages...
2026-03-09T20:13:52.314 INFO:teuthology.task.internal:Checking packages for os_type 'ubuntu', flavor 'default' and ceph hash 'e911bdebe5c8faa3800735d1568fcdca65db60df'
2026-03-09T20:13:52.314 WARNING:teuthology.packaging:More than one of ref, tag, branch, or sha1 supplied; using branch
2026-03-09T20:13:52.314 INFO:teuthology.packaging:ref: None
2026-03-09T20:13:52.314 INFO:teuthology.packaging:tag: None
2026-03-09T20:13:52.314 INFO:teuthology.packaging:branch: squid
2026-03-09T20:13:52.314 INFO:teuthology.packaging:sha1: e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:13:52.314 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&ref=squid
2026-03-09T20:13:52.961 INFO:teuthology.task.internal:Found packages for ceph version 19.2.3-678-ge911bdeb-1jammy
2026-03-09T20:13:52.962 INFO:teuthology.run_tasks:Running task internal.buildpackages_prep...
2026-03-09T20:13:52.963 INFO:teuthology.task.internal:no buildpackages task found
2026-03-09T20:13:52.963 INFO:teuthology.run_tasks:Running task internal.save_config...
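Note: the cephadm.shell script in the job config above resolves each monitoring daemon to its host IP with `ceph orch ... -f json` piped through jq, then probes the service endpoints. A minimal standalone sketch of that lookup pattern, assuming a running cephadm cluster with the monitoring stack deployed and jq installed (the daemon type and port below are just illustrative values, not taken from this run):

    #!/usr/bin/env bash
    # Sketch: map a daemon type to the IP of the host running it, using the
    # same `ceph orch ps` / `ceph orch host ls` JSON + jq pattern as the test.
    set -euo pipefail
    daemon_type=${1:-prometheus}   # e.g. prometheus, grafana, alertmanager
    host=$(ceph orch ps --daemon-type "$daemon_type" -f json | jq -r '.[0].hostname')
    ip=$(ceph orch host ls -f json | jq -r --arg h "$host" '.[] | select(.hostname==$h) | .addr')
    echo "$daemon_type runs on $host ($ip)"
    # The endpoint can then be probed, e.g. (port is an example):
    #   curl -s "http://${ip}:9095/api/v1/status/config" | jq -e '.status == "success"'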
2026-03-09T20:13:52.964 INFO:teuthology.task.internal:Saving configuration
2026-03-09T20:13:52.969 INFO:teuthology.run_tasks:Running task internal.check_lock...
2026-03-09T20:13:52.970 INFO:teuthology.task.internal.check_lock:Checking locks...
2026-03-09T20:13:52.977 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm03.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 20:12:04.981335', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:03', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKRPb5k7uAubZOlnYx2KAz3WwBUCBgcSVrnHkJoH7CKehjVzvf703LVYsPcgRjoKe6UXz+4mJA8ZOShDcEqGXo4='}
2026-03-09T20:13:52.984 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm04.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 20:12:04.980796', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:04', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLAB8AmIE9CmVqiMnWcuwmlWqsbxtmvaXwnGGJnFdmBnBA5HNnGm784AvBf8s2JSdUI//Z6Mo+Fyt7ZAnakOlU8='}
2026-03-09T20:13:52.991 DEBUG:teuthology.task.internal.check_lock:machine status is {'name': 'vm08.local', 'description': '/archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641', 'up': True, 'machine_type': 'vps', 'is_vm': True, 'vm_host': {'name': 'localhost', 'description': None, 'up': True, 'machine_type': 'libvirt', 'is_vm': False, 'vm_host': None, 'os_type': None, 'os_version': None, 'arch': None, 'locked': True, 'locked_since': None, 'locked_by': None, 'mac_address': None, 'ssh_pub_key': None}, 'os_type': 'ubuntu', 'os_version': '22.04', 'arch': 'x86_64', 'locked': True, 'locked_since': '2026-03-09 20:12:04.980380', 'locked_by': 'kyr', 'mac_address': '52:55:00:00:00:08', 'ssh_pub_key': 'ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK6bUif1tgvaCD/PP5A5bhtWdcn0hV56rvythdFThC5axvjSkcUsik59qsQcOAKYNlyBhdoQs5oG+u1Kwty3IVY='}
2026-03-09T20:13:52.991 INFO:teuthology.run_tasks:Running task internal.add_remotes...
2026-03-09T20:13:52.992 INFO:teuthology.task.internal:roles: ubuntu@vm03.local - ['host.a', 'mon.a', 'mgr.a', 'osd.0']
2026-03-09T20:13:52.992 INFO:teuthology.task.internal:roles: ubuntu@vm04.local - ['host.b', 'mon.b', 'mgr.b', 'osd.1']
2026-03-09T20:13:52.992 INFO:teuthology.task.internal:roles: ubuntu@vm08.local - ['host.c', 'mon.c', 'osd.2']
2026-03-09T20:13:52.992 INFO:teuthology.run_tasks:Running task console_log...
2026-03-09T20:13:53.001 DEBUG:teuthology.task.console_log:vm03 does not support IPMI; excluding
2026-03-09T20:13:53.007 DEBUG:teuthology.task.console_log:vm04 does not support IPMI; excluding
2026-03-09T20:13:53.013 DEBUG:teuthology.task.console_log:vm08 does not support IPMI; excluding
2026-03-09T20:13:53.013 DEBUG:teuthology.exit:Installing handler: Handler(exiter=, func=.kill_console_loggers at 0x7fd067a7a170>, signals=[15])
2026-03-09T20:13:53.013 INFO:teuthology.run_tasks:Running task internal.connect...
2026-03-09T20:13:53.014 INFO:teuthology.task.internal:Opening connections...
2026-03-09T20:13:53.014 DEBUG:teuthology.task.internal:connecting to ubuntu@vm03.local
2026-03-09T20:13:53.015 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:13:53.075 DEBUG:teuthology.task.internal:connecting to ubuntu@vm04.local
2026-03-09T20:13:53.075 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:13:53.132 DEBUG:teuthology.task.internal:connecting to ubuntu@vm08.local
2026-03-09T20:13:53.133 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:13:53.192 INFO:teuthology.run_tasks:Running task internal.push_inventory...
2026-03-09T20:13:53.194 DEBUG:teuthology.orchestra.run.vm03:> uname -m
2026-03-09T20:13:53.197 INFO:teuthology.orchestra.run.vm03.stdout:x86_64
2026-03-09T20:13:53.198 DEBUG:teuthology.orchestra.run.vm03:> cat /etc/os-release
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:NAME="Ubuntu"
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_ID="22.04"
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:VERSION_CODENAME=jammy
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:ID=ubuntu
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:ID_LIKE=debian
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T20:13:53.241 INFO:teuthology.orchestra.run.vm03.stdout:UBUNTU_CODENAME=jammy
2026-03-09T20:13:53.241 INFO:teuthology.lock.ops:Updating vm03.local on lock server
2026-03-09T20:13:53.246 DEBUG:teuthology.orchestra.run.vm04:> uname -m
2026-03-09T20:13:53.249 INFO:teuthology.orchestra.run.vm04.stdout:x86_64
2026-03-09T20:13:53.249 DEBUG:teuthology.orchestra.run.vm04:> cat /etc/os-release
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:NAME="Ubuntu"
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_ID="22.04"
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:VERSION_CODENAME=jammy
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:ID=ubuntu
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:ID_LIKE=debian
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T20:13:53.293 INFO:teuthology.orchestra.run.vm04.stdout:UBUNTU_CODENAME=jammy
2026-03-09T20:13:53.293 INFO:teuthology.lock.ops:Updating vm04.local on lock server
2026-03-09T20:13:53.298 DEBUG:teuthology.orchestra.run.vm08:> uname -m
2026-03-09T20:13:53.302 INFO:teuthology.orchestra.run.vm08.stdout:x86_64
2026-03-09T20:13:53.302 DEBUG:teuthology.orchestra.run.vm08:> cat /etc/os-release
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:PRETTY_NAME="Ubuntu 22.04.5 LTS"
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:NAME="Ubuntu"
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_ID="22.04"
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:VERSION="22.04.5 LTS (Jammy Jellyfish)"
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:VERSION_CODENAME=jammy
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:ID=ubuntu
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:ID_LIKE=debian
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:HOME_URL="https://www.ubuntu.com/"
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:SUPPORT_URL="https://help.ubuntu.com/"
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
2026-03-09T20:13:53.348 INFO:teuthology.orchestra.run.vm08.stdout:UBUNTU_CODENAME=jammy
2026-03-09T20:13:53.348 INFO:teuthology.lock.ops:Updating vm08.local on lock server
2026-03-09T20:13:53.354 INFO:teuthology.run_tasks:Running task internal.serialize_remote_roles...
2026-03-09T20:13:53.356 INFO:teuthology.run_tasks:Running task internal.check_conflict...
2026-03-09T20:13:53.357 INFO:teuthology.task.internal:Checking for old test directory...
2026-03-09T20:13:53.357 DEBUG:teuthology.orchestra.run.vm03:> test '!' -e /home/ubuntu/cephtest
2026-03-09T20:13:53.358 DEBUG:teuthology.orchestra.run.vm04:> test '!' -e /home/ubuntu/cephtest
2026-03-09T20:13:53.359 DEBUG:teuthology.orchestra.run.vm08:> test '!' -e /home/ubuntu/cephtest
2026-03-09T20:13:53.395 INFO:teuthology.run_tasks:Running task internal.check_ceph_data...
2026-03-09T20:13:53.454 INFO:teuthology.task.internal:Checking for non-empty /var/lib/ceph...
2026-03-09T20:13:53.455 DEBUG:teuthology.orchestra.run.vm03:> test -z $(ls -A /var/lib/ceph)
2026-03-09T20:13:53.456 DEBUG:teuthology.orchestra.run.vm04:> test -z $(ls -A /var/lib/ceph)
2026-03-09T20:13:53.457 DEBUG:teuthology.orchestra.run.vm08:> test -z $(ls -A /var/lib/ceph)
2026-03-09T20:13:53.458 INFO:teuthology.orchestra.run.vm03.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T20:13:53.459 INFO:teuthology.orchestra.run.vm04.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T20:13:53.461 INFO:teuthology.orchestra.run.vm08.stderr:ls: cannot access '/var/lib/ceph': No such file or directory
2026-03-09T20:13:53.462 INFO:teuthology.run_tasks:Running task internal.vm_setup...
2026-03-09T20:13:53.474 DEBUG:teuthology.orchestra.run.vm03:> test -e /ceph-qa-ready
2026-03-09T20:13:53.504 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:13:53.763 DEBUG:teuthology.orchestra.run.vm04:> test -e /ceph-qa-ready
2026-03-09T20:13:53.765 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:13:54.084 DEBUG:teuthology.orchestra.run.vm08:> test -e /ceph-qa-ready
2026-03-09T20:13:54.087 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:13:54.315 INFO:teuthology.run_tasks:Running task internal.base...
2026-03-09T20:13:54.317 INFO:teuthology.task.internal:Creating test directory...
2026-03-09T20:13:54.317 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T20:13:54.318 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T20:13:54.319 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest
2026-03-09T20:13:54.322 INFO:teuthology.run_tasks:Running task internal.archive_upload...
2026-03-09T20:13:54.323 INFO:teuthology.run_tasks:Running task internal.archive...
2026-03-09T20:13:54.324 INFO:teuthology.task.internal:Creating archive directory...
2026-03-09T20:13:54.324 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T20:13:54.362 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T20:13:54.366 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive
2026-03-09T20:13:54.371 INFO:teuthology.run_tasks:Running task internal.coredump...
2026-03-09T20:13:54.372 INFO:teuthology.task.internal:Enabling coredump saving...
2026-03-09T20:13:54.372 DEBUG:teuthology.orchestra.run.vm03:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T20:13:54.408 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:13:54.408 DEBUG:teuthology.orchestra.run.vm04:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T20:13:54.411 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:13:54.412 DEBUG:teuthology.orchestra.run.vm08:> test -f /run/.containerenv -o -f /.dockerenv
2026-03-09T20:13:54.414 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:13:54.414 DEBUG:teuthology.orchestra.run.vm03:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T20:13:54.450 DEBUG:teuthology.orchestra.run.vm04:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T20:13:54.454 DEBUG:teuthology.orchestra.run.vm08:> install -d -m0755 -- /home/ubuntu/cephtest/archive/coredump && sudo sysctl -w kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core && echo kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core | sudo tee -a /etc/sysctl.conf
2026-03-09T20:13:54.458 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:13:54.462 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:13:54.463 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:13:54.464 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = /home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:13:54.467 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:13:54.469 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern=/home/ubuntu/cephtest/archive/coredump/%t.%p.core
2026-03-09T20:13:54.470 INFO:teuthology.run_tasks:Running task internal.sudo...
2026-03-09T20:13:54.472 INFO:teuthology.task.internal:Configuring sudo...
2026-03-09T20:13:54.472 DEBUG:teuthology.orchestra.run.vm03:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T20:13:54.506 DEBUG:teuthology.orchestra.run.vm04:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T20:13:54.510 DEBUG:teuthology.orchestra.run.vm08:> sudo sed -i.orig.teuthology -e 's/^\([^#]*\) \(requiretty\)/\1 !\2/g' -e 's/^\([^#]*\) !\(visiblepw\)/\1 \2/g' /etc/sudoers
2026-03-09T20:13:54.519 INFO:teuthology.run_tasks:Running task internal.syslog...
2026-03-09T20:13:54.521 INFO:teuthology.task.internal.syslog:Starting syslog monitoring...
2026-03-09T20:13:54.521 DEBUG:teuthology.orchestra.run.vm03:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T20:13:54.554 DEBUG:teuthology.orchestra.run.vm04:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T20:13:54.562 DEBUG:teuthology.orchestra.run.vm08:> mkdir -p -m0755 -- /home/ubuntu/cephtest/archive/syslog
2026-03-09T20:13:54.565 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T20:13:54.600 DEBUG:teuthology.orchestra.run.vm03:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T20:13:54.644 DEBUG:teuthology.orchestra.run.vm03:> set -ex
2026-03-09T20:13:54.644 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T20:13:54.697 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T20:13:54.700 DEBUG:teuthology.orchestra.run.vm04:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T20:13:54.744 DEBUG:teuthology.orchestra.run.vm04:> set -ex
2026-03-09T20:13:54.744 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T20:13:54.794 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T20:13:54.798 DEBUG:teuthology.orchestra.run.vm08:> install -m 666 /dev/null /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T20:13:54.842 DEBUG:teuthology.orchestra.run.vm08:> set -ex
2026-03-09T20:13:54.842 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/rsyslog.d/80-cephtest.conf
2026-03-09T20:13:54.892 DEBUG:teuthology.orchestra.run.vm03:> sudo service rsyslog restart
2026-03-09T20:13:54.893 DEBUG:teuthology.orchestra.run.vm04:> sudo service rsyslog restart
2026-03-09T20:13:54.894 DEBUG:teuthology.orchestra.run.vm08:> sudo service rsyslog restart
2026-03-09T20:13:54.954 INFO:teuthology.run_tasks:Running task internal.timer...
2026-03-09T20:13:54.955 INFO:teuthology.task.internal:Starting timer...
2026-03-09T20:13:54.955 INFO:teuthology.run_tasks:Running task pcp...
2026-03-09T20:13:54.958 INFO:teuthology.run_tasks:Running task selinux...
2026-03-09T20:13:54.960 INFO:teuthology.task.selinux:Excluding vm03: VMs are not yet supported
2026-03-09T20:13:54.960 INFO:teuthology.task.selinux:Excluding vm04: VMs are not yet supported
2026-03-09T20:13:54.960 INFO:teuthology.task.selinux:Excluding vm08: VMs are not yet supported
2026-03-09T20:13:54.961 DEBUG:teuthology.task.selinux:Getting current SELinux state
2026-03-09T20:13:54.961 DEBUG:teuthology.task.selinux:Existing SELinux modes: {}
2026-03-09T20:13:54.961 INFO:teuthology.task.selinux:Putting SELinux into permissive mode
2026-03-09T20:13:54.961 INFO:teuthology.run_tasks:Running task ansible.cephlab...
2026-03-09T20:13:54.962 DEBUG:teuthology.task:Applying overrides for task ansible.cephlab: {'branch': 'main', 'skip_tags': 'nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs', 'vars': {'timezone': 'UTC'}}
2026-03-09T20:13:54.962 DEBUG:teuthology.repo_utils:Setting repo remote to https://github.com/ceph/ceph-cm-ansible.git
2026-03-09T20:13:54.964 INFO:teuthology.repo_utils:Fetching github.com_ceph_ceph-cm-ansible_main from origin
2026-03-09T20:13:55.475 DEBUG:teuthology.repo_utils:Resetting repo at /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main to origin/main
2026-03-09T20:13:55.482 INFO:teuthology.task.ansible:Playbook: [{'import_playbook': 'ansible_managed.yml'}, {'import_playbook': 'teuthology.yml'}, {'hosts': 'testnodes', 'tasks': [{'set_fact': {'ran_from_cephlab_playbook': True}}]}, {'import_playbook': 'testnodes.yml'}, {'import_playbook': 'container-host.yml'}, {'import_playbook': 'cobbler.yml'}, {'import_playbook': 'paddles.yml'}, {'import_playbook': 'pulpito.yml'}, {'hosts': 'testnodes', 'become': True, 'tasks': [{'name': 'Touch /ceph-qa-ready', 'file': {'path': '/ceph-qa-ready', 'state': 'touch'}, 'when': 'ran_from_cephlab_playbook|bool'}]}]
2026-03-09T20:13:55.482 DEBUG:teuthology.task.ansible:Running ansible-playbook -v --extra-vars '{"ansible_ssh_user": "ubuntu", "timezone": "UTC"}' -i /tmp/teuth_ansible_inventorygjyh22q1 --limit vm03.local,vm04.local,vm08.local /home/teuthos/src/github.com_ceph_ceph-cm-ansible_main/cephlab.yml --skip-tags nagios,monitoring-scripts,hostname,pubkeys,zap,sudoers,kerberos,ntp-client,resolvconf,cpan,nfs
2026-03-09T20:16:23.369 DEBUG:teuthology.task.ansible:Reconnecting to [Remote(name='ubuntu@vm03.local'), Remote(name='ubuntu@vm04.local'), Remote(name='ubuntu@vm08.local')]
2026-03-09T20:16:23.369 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm03.local'
2026-03-09T20:16:23.369 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm03.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:16:23.432 DEBUG:teuthology.orchestra.run.vm03:> true
2026-03-09T20:16:23.636 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm03.local'
2026-03-09T20:16:23.636 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm04.local'
2026-03-09T20:16:23.637 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm04.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:16:23.695 DEBUG:teuthology.orchestra.run.vm04:> true
2026-03-09T20:16:23.892 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm04.local'
2026-03-09T20:16:23.892 INFO:teuthology.orchestra.remote:Trying to reconnect to host 'ubuntu@vm08.local'
2026-03-09T20:16:23.892 DEBUG:teuthology.orchestra.connection:{'hostname': 'vm08.local', 'username': 'ubuntu', 'timeout': 60}
2026-03-09T20:16:23.955 DEBUG:teuthology.orchestra.run.vm08:> true
2026-03-09T20:16:24.160 INFO:teuthology.orchestra.remote:Successfully reconnected to host 'ubuntu@vm08.local'
2026-03-09T20:16:24.161 INFO:teuthology.run_tasks:Running task clock...
2026-03-09T20:16:24.163 INFO:teuthology.task.clock:Syncing clocks and checking initial clock skew...
2026-03-09T20:16:24.163 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T20:16:24.163 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T20:16:24.164 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T20:16:24.164 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T20:16:24.166 INFO:teuthology.orchestra.run:Running command with timeout 360
2026-03-09T20:16:24.166 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ntp.service || sudo systemctl stop ntpd.service || sudo systemctl stop chronyd.service ; sudo ntpd -gq || sudo chronyc makestep ; sudo systemctl start ntp.service || sudo systemctl start ntpd.service || sudo systemctl start chronyd.service ; PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true
2026-03-09T20:16:24.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T20:16:24.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: Command line: ntpd -gq
2026-03-09T20:16:24.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: ----------------------------------------------------
2026-03-09T20:16:24.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T20:16:24.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T20:16:24.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: corporation. Support and training for ntp-4 are
2026-03-09T20:16:24.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: available at https://www.nwtime.org/support
2026-03-09T20:16:24.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: ----------------------------------------------------
2026-03-09T20:16:24.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: proto: precision = 0.040 usec (-24)
2026-03-09T20:16:24.180 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: basedate set to 2022-02-04
2026-03-09T20:16:24.180 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: gps base set to 2022-02-06 (week 2196)
2026-03-09T20:16:24.180 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T20:16:24.180 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T20:16:24.180 INFO:teuthology.orchestra.run.vm03.stderr: 9 Mar 20:16:24 ntpd[16081]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T20:16:24.181 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T20:16:24.181 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T20:16:24.181 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T20:16:24.181 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: Listen normally on 3 ens3 192.168.123.103:123
2026-03-09T20:16:24.181 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: Listen normally on 4 lo [::1]:123
2026-03-09T20:16:24.181 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:3%2]:123
2026-03-09T20:16:24.181 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:24 ntpd[16081]: Listening on routing socket on fd #22 for interface updates
2026-03-09T20:16:24.182 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: Command line: ntpd -gq
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: ----------------------------------------------------
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: corporation. Support and training for ntp-4 are
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: available at https://www.nwtime.org/support
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: ----------------------------------------------------
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: proto: precision = 0.029 usec (-25)
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: basedate set to 2022-02-04
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: gps base set to 2022-02-06 (week 2196)
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stderr: 9 Mar 20:16:24 ntpd[16111]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: Listen normally on 3 ens3 192.168.123.104:123
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: Listen normally on 4 lo [::1]:123
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:4%2]:123
2026-03-09T20:16:24.183 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:24 ntpd[16111]: Listening on routing socket on fd #22 for interface updates
2026-03-09T20:16:24.218 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: ntpd 4.2.8p15@1.3728-o Wed Feb 16 17:13:02 UTC 2022 (1): Starting
2026-03-09T20:16:24.218 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: Command line: ntpd -gq
2026-03-09T20:16:24.218 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: ----------------------------------------------------
2026-03-09T20:16:24.218 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: ntp-4 is maintained by Network Time Foundation,
2026-03-09T20:16:24.218 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
2026-03-09T20:16:24.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: corporation. Support and training for ntp-4 are
2026-03-09T20:16:24.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: available at https://www.nwtime.org/support
2026-03-09T20:16:24.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: ----------------------------------------------------
2026-03-09T20:16:24.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: proto: precision = 0.030 usec (-25)
2026-03-09T20:16:24.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: basedate set to 2022-02-04
2026-03-09T20:16:24.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: gps base set to 2022-02-06 (week 2196)
2026-03-09T20:16:24.220 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): good hash signature
2026-03-09T20:16:24.220 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): loaded, expire=2025-12-28T00:00:00Z last=2017-01-01T00:00:00Z ofs=37
2026-03-09T20:16:24.220 INFO:teuthology.orchestra.run.vm08.stderr: 9 Mar 20:16:24 ntpd[16106]: leapsecond file ('/usr/share/zoneinfo/leap-seconds.list'): expired 72 days ago
2026-03-09T20:16:24.221 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: Listen and drop on 0 v6wildcard [::]:123
2026-03-09T20:16:24.221 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: Listen and drop on 1 v4wildcard 0.0.0.0:123
2026-03-09T20:16:24.221 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: Listen normally on 2 lo 127.0.0.1:123
2026-03-09T20:16:24.221 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: Listen normally on 3 ens3 192.168.123.108:123
2026-03-09T20:16:24.221 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: Listen normally on 4 lo [::1]:123
2026-03-09T20:16:24.221 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: Listen normally on 5 ens3 [fe80::5055:ff:fe00:8%2]:123
2026-03-09T20:16:24.221 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:24 ntpd[16106]: Listening on routing socket on fd #22 for interface updates
2026-03-09T20:16:25.180 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:25 ntpd[16081]: Soliciting pool server 176.9.8.206
2026-03-09T20:16:25.182 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:25 ntpd[16111]: Soliciting pool server 176.9.8.206
2026-03-09T20:16:25.220 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:25 ntpd[16106]: Soliciting pool server 195.201.125.53
2026-03-09T20:16:26.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:26 ntpd[16081]: Soliciting pool server 131.188.3.220
2026-03-09T20:16:26.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:26 ntpd[16111]: Soliciting pool server 131.188.3.220
2026-03-09T20:16:26.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:26 ntpd[16106]: Soliciting pool server 176.9.8.206
2026-03-09T20:16:26.333 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:26 ntpd[16106]: Soliciting pool server 158.101.188.125
2026-03-09T20:16:26.333 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:26 ntpd[16081]: Soliciting pool server 158.101.188.125
2026-03-09T20:16:26.333 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:26 ntpd[16111]: Soliciting pool server 158.101.188.125
2026-03-09T20:16:27.178 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:27 ntpd[16081]: Soliciting pool server 141.144.241.16
2026-03-09T20:16:27.178 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:27 ntpd[16081]: Soliciting pool server 93.177.65.20
2026-03-09T20:16:27.179 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:27 ntpd[16081]: Soliciting pool server 91.202.42.82
2026-03-09T20:16:27.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:27 ntpd[16111]: Soliciting pool server 141.144.241.16
2026-03-09T20:16:27.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:27 ntpd[16111]: Soliciting pool server 93.177.65.20
2026-03-09T20:16:27.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:27 ntpd[16111]: Soliciting pool server 91.202.42.82
2026-03-09T20:16:27.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:27 ntpd[16106]: Soliciting pool server 141.144.241.16
2026-03-09T20:16:27.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:27 ntpd[16106]: Soliciting pool server 131.188.3.220
2026-03-09T20:16:27.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:27 ntpd[16106]: Soliciting pool server 51.75.67.47
2026-03-09T20:16:28.178 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:28 ntpd[16081]: Soliciting pool server 144.76.139.8
2026-03-09T20:16:28.178 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:28 ntpd[16081]: Soliciting pool server 185.252.140.126
2026-03-09T20:16:28.178 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:28 ntpd[16081]: Soliciting pool server 195.201.125.53
2026-03-09T20:16:28.178 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:28 ntpd[16081]: Soliciting pool server 152.53.184.199
2026-03-09T20:16:28.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:28 ntpd[16111]: Soliciting pool server 144.76.139.8
2026-03-09T20:16:28.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:28 ntpd[16111]: Soliciting pool server 185.252.140.126
2026-03-09T20:16:28.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:28 ntpd[16111]: Soliciting pool server 195.201.125.53
2026-03-09T20:16:28.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:28 ntpd[16111]: Soliciting pool server 152.53.184.199
2026-03-09T20:16:28.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:28 ntpd[16106]: Soliciting pool server 91.202.42.82
2026-03-09T20:16:28.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:28 ntpd[16106]: Soliciting pool server 185.252.140.126
2026-03-09T20:16:28.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:28 ntpd[16106]: Soliciting pool server 93.177.65.20
2026-03-09T20:16:28.220 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:28 ntpd[16106]: Soliciting pool server 176.9.44.212
2026-03-09T20:16:29.177 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:29 ntpd[16081]: Soliciting pool server 141.144.246.224
2026-03-09T20:16:29.177 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:29 ntpd[16081]: Soliciting pool server 152.53.191.142
2026-03-09T20:16:29.177 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:29 ntpd[16081]: Soliciting pool server 93.241.86.156
2026-03-09T20:16:29.178 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:29 ntpd[16081]: Soliciting pool server 185.125.190.57
2026-03-09T20:16:29.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:29 ntpd[16111]: Soliciting pool server 141.144.246.224
2026-03-09T20:16:29.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:29 ntpd[16111]: Soliciting pool server 152.53.191.142
2026-03-09T20:16:29.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:29 ntpd[16111]: Soliciting pool server 93.241.86.156
2026-03-09T20:16:29.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:29 ntpd[16111]: Soliciting pool server 185.125.190.57
2026-03-09T20:16:29.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:29 ntpd[16106]: Soliciting pool server 152.53.184.199
2026-03-09T20:16:29.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:29 ntpd[16106]: Soliciting pool server 144.76.139.8
2026-03-09T20:16:29.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:29 ntpd[16106]: Soliciting pool server 93.241.86.156
2026-03-09T20:16:29.220 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:29 ntpd[16106]: Soliciting pool server 91.189.91.157
2026-03-09T20:16:30.177 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:30 ntpd[16081]: Soliciting pool server 185.125.190.56
2026-03-09T20:16:30.177 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:30 ntpd[16081]: Soliciting pool server 78.46.87.46
2026-03-09T20:16:30.177 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:30 ntpd[16081]: Soliciting pool server 51.75.67.47
2026-03-09T20:16:30.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:30 ntpd[16111]: Soliciting pool server 185.125.190.56
2026-03-09T20:16:30.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:30 ntpd[16111]: Soliciting pool server 78.46.87.46
2026-03-09T20:16:30.181 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:30 ntpd[16111]: Soliciting pool server 51.75.67.47
2026-03-09T20:16:30.219 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:30 ntpd[16106]: Soliciting pool server 185.125.190.57
2026-03-09T20:16:30.220 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:30 ntpd[16106]: Soliciting pool server 141.144.246.224
2026-03-09T20:16:30.220 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:30 ntpd[16106]: Soliciting pool server 152.53.191.142
2026-03-09T20:16:33.201 INFO:teuthology.orchestra.run.vm03.stdout: 9 Mar 20:16:33 ntpd[16081]: ntpd: time slew +0.005842 s
2026-03-09T20:16:33.201 INFO:teuthology.orchestra.run.vm03.stdout:ntpd: time slew +0.005842s
2026-03-09T20:16:33.222 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T20:16:33.250 INFO:teuthology.orchestra.run.vm03.stdout:==============================================================================
2026-03-09T20:16:33.250 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:33.250 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:33.250 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:33.250 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:33.250 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.206 INFO:teuthology.orchestra.run.vm04.stdout: 9 Mar 20:16:35 ntpd[16111]: ntpd: time slew -0.000005 s
2026-03-09T20:16:35.206 INFO:teuthology.orchestra.run.vm04.stdout:ntpd: time slew -0.000005s
2026-03-09T20:16:35.227 INFO:teuthology.orchestra.run.vm04.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T20:16:35.227 INFO:teuthology.orchestra.run.vm04.stdout:==============================================================================
2026-03-09T20:16:35.227 INFO:teuthology.orchestra.run.vm04.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.227 INFO:teuthology.orchestra.run.vm04.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.227 INFO:teuthology.orchestra.run.vm04.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.227 INFO:teuthology.orchestra.run.vm04.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.227 INFO:teuthology.orchestra.run.vm04.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.241 INFO:teuthology.orchestra.run.vm08.stdout: 9 Mar 20:16:35 ntpd[16106]: ntpd: time slew +0.002295 s
2026-03-09T20:16:35.241 INFO:teuthology.orchestra.run.vm08.stdout:ntpd: time slew +0.002295s
2026-03-09T20:16:35.263 INFO:teuthology.orchestra.run.vm08.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T20:16:35.263 INFO:teuthology.orchestra.run.vm08.stdout:==============================================================================
2026-03-09T20:16:35.263 INFO:teuthology.orchestra.run.vm08.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.263 INFO:teuthology.orchestra.run.vm08.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.263 INFO:teuthology.orchestra.run.vm08.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.263 INFO:teuthology.orchestra.run.vm08.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.263 INFO:teuthology.orchestra.run.vm08.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:16:35.263 INFO:teuthology.run_tasks:Running task install...
2026-03-09T20:16:35.265 DEBUG:teuthology.task.install:project ceph
2026-03-09T20:16:35.265 DEBUG:teuthology.task.install:INSTALL overrides: {'ceph': {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'}, 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T20:16:35.265 DEBUG:teuthology.task.install:config {'flavor': 'default', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'extra_system_packages': {'deb': ['python3-xmltodict', 'python3-jmespath'], 'rpm': ['bzip2', 'perl-Test-Harness', 'python3-xmltodict', 'python3-jmespath']}}
2026-03-09T20:16:35.265 INFO:teuthology.task.install:Using flavor: default
2026-03-09T20:16:35.268 DEBUG:teuthology.task.install:Package list is: {'deb': ['ceph', 'cephadm', 'ceph-mds', 'ceph-mgr', 'ceph-common', 'ceph-fuse', 'ceph-test', 'ceph-volume', 'radosgw', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'libcephfs2', 'libcephfs-dev', 'librados2', 'librbd1', 'rbd-fuse'], 'rpm': ['ceph-radosgw', 'ceph-test', 'ceph', 'ceph-base', 'cephadm', 'ceph-immutable-object-cache', 'ceph-mgr', 'ceph-mgr-dashboard', 'ceph-mgr-diskprediction-local', 'ceph-mgr-rook', 'ceph-mgr-cephadm', 'ceph-fuse', 'ceph-volume', 'librados-devel', 'libcephfs2', 'libcephfs-devel', 'librados2', 'librbd1', 'python3-rados', 'python3-rgw', 'python3-cephfs', 'python3-rbd', 'rbd-fuse', 'rbd-mirror', 'rbd-nbd']}
2026-03-09T20:16:35.268 INFO:teuthology.task.install:extra packages: []
2026-03-09T20:16:35.268 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-key list | grep Ceph
2026-03-09T20:16:35.268 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-key list | grep Ceph
2026-03-09T20:16:35.268 DEBUG:teuthology.orchestra.run.vm08:> sudo apt-key list | grep Ceph
2026-03-09T20:16:35.305 INFO:teuthology.orchestra.run.vm03.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-09T20:16:35.311 INFO:teuthology.orchestra.run.vm04.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-09T20:16:35.326 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-09T20:16:35.326 INFO:teuthology.orchestra.run.vm03.stdout:uid [ unknown] Ceph.com (release key)
2026-03-09T20:16:35.327 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-09T20:16:35.327 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-09T20:16:35.327 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:16:35.330 INFO:teuthology.orchestra.run.vm04.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build)
2026-03-09T20:16:35.331 INFO:teuthology.orchestra.run.vm04.stdout:uid [ unknown] Ceph.com (release key)
2026-03-09T20:16:35.331 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64
2026-03-09T20:16:35.331 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64
2026-03-09T20:16:35.331 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df
2026-03-09T20:16:35.418 INFO:teuthology.orchestra.run.vm08.stderr:Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
2026-03-09T20:16:35.418 INFO:teuthology.orchestra.run.vm08.stdout:uid [ unknown] Ceph automated package build (Ceph automated package build) 2026-03-09T20:16:35.418 INFO:teuthology.orchestra.run.vm08.stdout:uid [ unknown] Ceph.com (release key) 2026-03-09T20:16:35.419 INFO:teuthology.task.install.deb:Installing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on remote deb x86_64 2026-03-09T20:16:35.419 INFO:teuthology.task.install.deb:Installing system (non-project) packages: python3-xmltodict, python3-jmespath on remote deb x86_64 2026-03-09T20:16:35.419 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:16:35.928 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-09T20:16:35.928 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:16:36.024 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-09T20:16:36.024 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:16:36.059 INFO:teuthology.task.install.deb:Pulling from https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/ 2026-03-09T20:16:36.059 INFO:teuthology.task.install.deb:Package version is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:16:36.487 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:16:36.487 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-09T20:16:36.495 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update 2026-03-09T20:16:36.519 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T20:16:36.519 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-09T20:16:36.526 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-get update 2026-03-09T20:16:36.615 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T20:16:36.615 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/apt/sources.list.d/ceph.list 2026-03-09T20:16:36.622 DEBUG:teuthology.orchestra.run.vm08:> sudo apt-get update 2026-03-09T20:16:36.678 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T20:16:36.683 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T20:16:36.691 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T20:16:36.784 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T20:16:36.795 INFO:teuthology.orchestra.run.vm08.stdout:Hit:1 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T20:16:36.798 INFO:teuthology.orchestra.run.vm08.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T20:16:36.806 INFO:teuthology.orchestra.run.vm08.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T20:16:36.813 INFO:teuthology.orchestra.run.vm08.stdout:Hit:4 https://archive.ubuntu.com/ubuntu 
jammy-backports InRelease 2026-03-09T20:16:36.817 INFO:teuthology.orchestra.run.vm04.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T20:16:36.846 INFO:teuthology.orchestra.run.vm04.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T20:16:36.878 INFO:teuthology.orchestra.run.vm04.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T20:16:37.074 INFO:teuthology.orchestra.run.vm04.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T20:16:37.112 INFO:teuthology.orchestra.run.vm03.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-09T20:16:37.143 INFO:teuthology.orchestra.run.vm04.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-09T20:16:37.183 INFO:teuthology.orchestra.run.vm08.stdout:Ign:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy InRelease 2026-03-09T20:16:37.226 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B] 2026-03-09T20:16:37.264 INFO:teuthology.orchestra.run.vm04.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B] 2026-03-09T20:16:37.299 INFO:teuthology.orchestra.run.vm08.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release [7662 B] 2026-03-09T20:16:37.340 INFO:teuthology.orchestra.run.vm03.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-09T20:16:37.386 INFO:teuthology.orchestra.run.vm04.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-09T20:16:37.415 INFO:teuthology.orchestra.run.vm08.stdout:Ign:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy Release.gpg 2026-03-09T20:16:37.454 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-09T20:16:37.507 INFO:teuthology.orchestra.run.vm04.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-09T20:16:37.530 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 25.8 kB in 1s (29.2 kB/s) 2026-03-09T20:16:37.532 INFO:teuthology.orchestra.run.vm08.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 Packages [18.1 kB] 2026-03-09T20:16:37.577 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 25.8 kB in 1s (28.7 kB/s) 2026-03-09T20:16:37.618 INFO:teuthology.orchestra.run.vm08.stdout:Fetched 25.8 kB in 1s (30.8 kB/s) 2026-03-09T20:16:38.305 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 
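The "sudo dd of=/etc/apt/sources.list.d/ceph.list" step above writes the repository definition that the following apt-get update picks up. The file contents are not echoed in the log, but from the chacra URL apt then fetches, a minimal sketch of an equivalent entry would look like the following (hypothetical reconstruction; the [trusted=yes] option is an assumption, inferred from apt ignoring the missing InRelease/Release.gpg for this repo and proceeding anyway):

    # sketch only: reconstruct the ceph.list entry for this run's sha1
    SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df
    echo "deb [trusted=yes] https://1.chacra.ceph.com/r/ceph/squid/${SHA1}/ubuntu/jammy/flavors/default jammy main" \
      | sudo tee /etc/apt/sources.list.d/ceph.list
    sudo apt-get update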
2026-03-09T20:16:38.321 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:16:38.325 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:16:38.327 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:16:38.339 DEBUG:teuthology.orchestra.run.vm08:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:16:38.341 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install ceph=19.2.3-678-ge911bdeb-1jammy cephadm=19.2.3-678-ge911bdeb-1jammy ceph-mds=19.2.3-678-ge911bdeb-1jammy ceph-mgr=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy ceph-fuse=19.2.3-678-ge911bdeb-1jammy ceph-test=19.2.3-678-ge911bdeb-1jammy ceph-volume=19.2.3-678-ge911bdeb-1jammy radosgw=19.2.3-678-ge911bdeb-1jammy python3-rados=19.2.3-678-ge911bdeb-1jammy python3-rgw=19.2.3-678-ge911bdeb-1jammy python3-cephfs=19.2.3-678-ge911bdeb-1jammy python3-rbd=19.2.3-678-ge911bdeb-1jammy libcephfs2=19.2.3-678-ge911bdeb-1jammy libcephfs-dev=19.2.3-678-ge911bdeb-1jammy librados2=19.2.3-678-ge911bdeb-1jammy librbd1=19.2.3-678-ge911bdeb-1jammy rbd-fuse=19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:16:38.357 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:16:38.373 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:16:38.374 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:16:38.565 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:16:38.566 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:16:38.578 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:16:38.579 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 
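The apt-get install invocations above pin every Ceph-related package to the exact chacra build (19.2.3-678-ge911bdeb-1jammy), so apt cannot substitute the distro's own ceph packages for the build under test. A trimmed sketch of the same pattern on a single node (package set shortened here for brevity; apt on newer releases treats --force-yes as deprecated in favour of the --allow-* options, so it is omitted):

    # sketch: pin the install to the chacra build resolved for this sha1
    CEPH_VER=19.2.3-678-ge911bdeb-1jammy
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
      -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" \
      install ceph=${CEPH_VER} ceph-common=${CEPH_VER} cephadm=${CEPH_VER}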
2026-03-09T20:16:38.581 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:16:38.582 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:16:38.785 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:16:38.786 INFO:teuthology.orchestra.run.vm08.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:16:38.786 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T20:16:38.786 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:16:38.787 INFO:teuthology.orchestra.run.vm08.stdout:The following additional packages will be installed: 2026-03-09T20:16:38.787 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-09T20:16:38.787 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-09T20:16:38.787 INFO:teuthology.orchestra.run.vm08.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:16:38.787 INFO:teuthology.orchestra.run.vm08.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-09T20:16:38.787 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 2026-03-09T20:16:38.788 INFO:teuthology.orchestra.run.vm08.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T20:16:38.788 
INFO:teuthology.orchestra.run.vm08.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-09T20:16:38.789 INFO:teuthology.orchestra.run.vm08.stdout:Suggested packages: 2026-03-09T20:16:38.789 INFO:teuthology.orchestra.run.vm08.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-09T20:16:38.789 INFO:teuthology.orchestra.run.vm08.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-09T20:16:38.789 INFO:teuthology.orchestra.run.vm08.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-09T20:16:38.789 INFO:teuthology.orchestra.run.vm08.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap 2026-03-09T20:16:38.789 INFO:teuthology.orchestra.run.vm08.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-09T20:16:38.789 INFO:teuthology.orchestra.run.vm08.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 2026-03-09T20:16:38.789 INFO:teuthology.orchestra.run.vm08.stdout: smart-notifier mailx | mailutils 2026-03-09T20:16:38.789 INFO:teuthology.orchestra.run.vm08.stdout:Recommended packages: 2026-03-09T20:16:38.789 INFO:teuthology.orchestra.run.vm08.stdout: btrfs-tools 2026-03-09T20:16:38.798 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:16:38.798 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:16:38.799 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T20:16:38.799 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:16:38.800 INFO:teuthology.orchestra.run.vm03.stdout:The following additional packages will be installed: 2026-03-09T20:16:38.800 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-09T20:16:38.800 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-09T20:16:38.800 INFO:teuthology.orchestra.run.vm03.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:16:38.800 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-09T20:16:38.800 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:16:38.801 
INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T20:16:38.801 INFO:teuthology.orchestra.run.vm03.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-09T20:16:38.802 INFO:teuthology.orchestra.run.vm03.stdout:Suggested packages: 2026-03-09T20:16:38.802 INFO:teuthology.orchestra.run.vm03.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-09T20:16:38.802 INFO:teuthology.orchestra.run.vm03.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-09T20:16:38.802 INFO:teuthology.orchestra.run.vm03.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-09T20:16:38.802 INFO:teuthology.orchestra.run.vm03.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc python3-dap 2026-03-09T20:16:38.802 INFO:teuthology.orchestra.run.vm03.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-09T20:16:38.802 INFO:teuthology.orchestra.run.vm03.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 2026-03-09T20:16:38.802 INFO:teuthology.orchestra.run.vm03.stdout: smart-notifier mailx | mailutils 2026-03-09T20:16:38.802 INFO:teuthology.orchestra.run.vm03.stdout:Recommended packages: 2026-03-09T20:16:38.802 INFO:teuthology.orchestra.run.vm03.stdout: btrfs-tools 2026-03-09T20:16:38.823 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:16:38.823 INFO:teuthology.orchestra.run.vm04.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:16:38.823 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T20:16:38.823 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout:The following additional packages will be installed: 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base ceph-mgr-cephadm ceph-mgr-dashboard ceph-mgr-diskprediction-local 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-k8sevents ceph-mgr-modules-core ceph-mon ceph-osd jq 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libqt5core5a libqt5dbus5 libqt5network5 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsqlite3-mod-ceph 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: libthrift-0.16.0 lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:16:38.824 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-pytest python3-repoze.lru 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python3-toml python3-waitress python3-wcwidth python3-webob 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python3-websocket python3-webtest python3-werkzeug python3-zc.lockfile 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: qttranslations5-l10n smartmontools socat unzip xmlstarlet zip 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout:Suggested packages: 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python3-influxdb readline-doc python3-beaker python-mako-doc 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python-natsort-doc httpd-wsgi libapache2-mod-python libapache2-mod-scgi 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: libjs-mochikit python-pecan-doc python-psutil-doc subversion 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python-pygments-doc ttf-bitstream-vera python-pyinotify-doc 
python3-dap 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python-sklearn-doc ipython3 python-waitress-doc python-webob-doc 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: python-webtest-doc python-werkzeug-doc python3-watchdog gsmartcontrol 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: smart-notifier mailx | mailutils 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout:Recommended packages: 2026-03-09T20:16:38.825 INFO:teuthology.orchestra.run.vm04.stdout: btrfs-tools 2026-03-09T20:16:38.839 INFO:teuthology.orchestra.run.vm08.stdout:The following NEW packages will be installed: 2026-03-09T20:16:38.839 INFO:teuthology.orchestra.run.vm08.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-09T20:16:38.839 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-09T20:16:38.839 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-09T20:16:38.839 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-09T20:16:38.839 INFO:teuthology.orchestra.run.vm08.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-09T20:16:38.839 INFO:teuthology.orchestra.run.vm08.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-09T20:16:38.840 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-09T20:16:38.841 INFO:teuthology.orchestra.run.vm08.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T20:16:38.841 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T20:16:38.841 INFO:teuthology.orchestra.run.vm08.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-09T20:16:38.841 
INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:16:38.841 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-09T20:16:38.841 INFO:teuthology.orchestra.run.vm08.stdout: socat unzip xmlstarlet zip 2026-03-09T20:16:38.841 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be upgraded: 2026-03-09T20:16:38.842 INFO:teuthology.orchestra.run.vm08.stdout: librados2 librbd1 2026-03-09T20:16:38.853 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed: 2026-03-09T20:16:38.853 INFO:teuthology.orchestra.run.vm03.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-09T20:16:38.853 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-09T20:16:38.853 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-09T20:16:38.853 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-09T20:16:38.853 INFO:teuthology.orchestra.run.vm03.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-09T20:16:38.853 INFO:teuthology.orchestra.run.vm03.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-09T20:16:38.853 INFO:teuthology.orchestra.run.vm03.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-09T20:16:38.854 INFO:teuthology.orchestra.run.vm03.stdout: socat unzip xmlstarlet zip 2026-03-09T20:16:38.855 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be upgraded: 2026-03-09T20:16:38.855 INFO:teuthology.orchestra.run.vm03.stdout: librados2 librbd1 2026-03-09T20:16:38.864 INFO:teuthology.orchestra.run.vm04.stdout:The following NEW packages will be installed: 2026-03-09T20:16:38.864 INFO:teuthology.orchestra.run.vm04.stdout: ceph ceph-base ceph-common ceph-fuse ceph-mds ceph-mgr ceph-mgr-cephadm 2026-03-09T20:16:38.864 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-dashboard ceph-mgr-diskprediction-local ceph-mgr-k8sevents 2026-03-09T20:16:38.865 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core ceph-mon ceph-osd ceph-test ceph-volume cephadm jq 2026-03-09T20:16:38.865 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-dev libcephfs2 libdouble-conversion3 libfuse2 libjq1 liblttng-ust1 2026-03-09T20:16:38.865 INFO:teuthology.orchestra.run.vm04.stdout: liblua5.3-dev libnbd0 liboath0 libonig5 libpcre2-16-0 libqt5core5a 2026-03-09T20:16:38.865 INFO:teuthology.orchestra.run.vm04.stdout: libqt5dbus5 libqt5network5 libradosstriper1 librdkafka1 libreadline-dev 2026-03-09T20:16:38.865 INFO:teuthology.orchestra.run.vm04.stdout: librgw2 libsqlite3-mod-ceph libthrift-0.16.0 lua-any lua-sec lua-socket 2026-03-09T20:16:38.865 INFO:teuthology.orchestra.run.vm04.stdout: lua5.1 luarocks nvme-cli pkg-config python-asyncssh-doc 2026-03-09T20:16:38.865 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:16:38.865 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-argparse python3-ceph-common python3-cephfs python3-cheroot 2026-03-09T20:16:38.865 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-iniconfig 2026-03-09T20:16:38.865 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-pluggy python3-portend 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-prettytable python3-psutil python3-py python3-pygments 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-pytest python3-rados python3-rbd 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-rgw python3-routes 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-rsa python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-toml python3-waitress python3-wcwidth 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest 
python3-werkzeug 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n radosgw rbd-fuse smartmontools 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: socat unzip xmlstarlet zip 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be upgraded: 2026-03-09T20:16:38.866 INFO:teuthology.orchestra.run.vm04.stdout: librados2 librbd1 2026-03-09T20:16:39.055 INFO:teuthology.orchestra.run.vm08.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:16:39.055 INFO:teuthology.orchestra.run.vm08.stdout:Need to get 178 MB of archives. 2026-03-09T20:16:39.055 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 782 MB of additional disk space will be used. 2026-03-09T20:16:39.055 INFO:teuthology.orchestra.run.vm08.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-09T20:16:39.061 INFO:teuthology.orchestra.run.vm03.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:16:39.061 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 178 MB of archives. 2026-03-09T20:16:39.061 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 782 MB of additional disk space will be used. 2026-03-09T20:16:39.061 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-09T20:16:39.228 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-09T20:16:39.231 INFO:teuthology.orchestra.run.vm08.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-09T20:16:39.232 INFO:teuthology.orchestra.run.vm03.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-09T20:16:39.236 INFO:teuthology.orchestra.run.vm08.stdout:Get:3 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-09T20:16:39.267 INFO:teuthology.orchestra.run.vm03.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-09T20:16:39.271 INFO:teuthology.orchestra.run.vm08.stdout:Get:4 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-09T20:16:39.332 INFO:teuthology.orchestra.run.vm04.stdout:2 upgraded, 107 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:16:39.332 INFO:teuthology.orchestra.run.vm04.stdout:Need to get 178 MB of archives. 2026-03-09T20:16:39.333 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 782 MB of additional disk space will be used. 
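The "2 upgraded, 107 newly installed ... 178 MB of archives ... 782 MB of additional disk space" lines are apt's standard transaction preview. If sizing test VMs, the same resolution and upgraded/newly-installed counts can be previewed without installing by running apt-get in simulate mode, for example:

    # sketch: dry-run the pinned install to preview the transaction
    sudo apt-get -s install ceph=19.2.3-678-ge911bdeb-1jammy ceph-common=19.2.3-678-ge911bdeb-1jammy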
2026-03-09T20:16:39.333 INFO:teuthology.orchestra.run.vm04.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblttng-ust1 amd64 2.13.1-1ubuntu1 [190 kB] 2026-03-09T20:16:39.368 INFO:teuthology.orchestra.run.vm03.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T20:16:39.373 INFO:teuthology.orchestra.run.vm03.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T20:16:39.376 INFO:teuthology.orchestra.run.vm08.stdout:Get:5 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T20:16:39.380 INFO:teuthology.orchestra.run.vm08.stdout:Get:6 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T20:16:39.386 INFO:teuthology.orchestra.run.vm03.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T20:16:39.391 INFO:teuthology.orchestra.run.vm03.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-09T20:16:39.392 INFO:teuthology.orchestra.run.vm03.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T20:16:39.392 INFO:teuthology.orchestra.run.vm03.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T20:16:39.392 INFO:teuthology.orchestra.run.vm03.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T20:16:39.395 INFO:teuthology.orchestra.run.vm08.stdout:Get:7 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T20:16:39.400 INFO:teuthology.orchestra.run.vm08.stdout:Get:8 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-09T20:16:39.402 INFO:teuthology.orchestra.run.vm08.stdout:Get:9 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T20:16:39.402 INFO:teuthology.orchestra.run.vm08.stdout:Get:10 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T20:16:39.403 INFO:teuthology.orchestra.run.vm08.stdout:Get:11 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T20:16:39.404 INFO:teuthology.orchestra.run.vm03.stdout:Get:12 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T20:16:39.405 INFO:teuthology.orchestra.run.vm08.stdout:Get:12 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T20:16:39.407 INFO:teuthology.orchestra.run.vm03.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T20:16:39.409 INFO:teuthology.orchestra.run.vm03.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T20:16:39.413 INFO:teuthology.orchestra.run.vm08.stdout:Get:13 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T20:16:39.414 
INFO:teuthology.orchestra.run.vm08.stdout:Get:14 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T20:16:39.416 INFO:teuthology.orchestra.run.vm08.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T20:16:39.443 INFO:teuthology.orchestra.run.vm03.stdout:Get:15 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T20:16:39.443 INFO:teuthology.orchestra.run.vm03.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T20:16:39.445 INFO:teuthology.orchestra.run.vm03.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T20:16:39.446 INFO:teuthology.orchestra.run.vm03.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T20:16:39.448 INFO:teuthology.orchestra.run.vm03.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T20:16:39.448 INFO:teuthology.orchestra.run.vm03.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T20:16:39.448 INFO:teuthology.orchestra.run.vm03.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T20:16:39.449 INFO:teuthology.orchestra.run.vm03.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T20:16:39.451 INFO:teuthology.orchestra.run.vm08.stdout:Get:16 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T20:16:39.452 INFO:teuthology.orchestra.run.vm08.stdout:Get:17 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T20:16:39.453 INFO:teuthology.orchestra.run.vm08.stdout:Get:18 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T20:16:39.455 INFO:teuthology.orchestra.run.vm08.stdout:Get:19 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T20:16:39.456 INFO:teuthology.orchestra.run.vm08.stdout:Get:20 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T20:16:39.457 INFO:teuthology.orchestra.run.vm08.stdout:Get:21 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T20:16:39.457 INFO:teuthology.orchestra.run.vm08.stdout:Get:22 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T20:16:39.458 INFO:teuthology.orchestra.run.vm03.stdout:Get:23 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T20:16:39.458 INFO:teuthology.orchestra.run.vm08.stdout:Get:23 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T20:16:39.462 INFO:teuthology.orchestra.run.vm04.stdout:Get:2 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librbd1 amd64 19.2.3-678-ge911bdeb-1jammy [3257 kB] 2026-03-09T20:16:39.484 
INFO:teuthology.orchestra.run.vm03.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T20:16:39.484 INFO:teuthology.orchestra.run.vm03.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T20:16:39.484 INFO:teuthology.orchestra.run.vm03.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T20:16:39.485 INFO:teuthology.orchestra.run.vm03.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T20:16:39.485 INFO:teuthology.orchestra.run.vm03.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T20:16:39.485 INFO:teuthology.orchestra.run.vm03.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T20:16:39.507 INFO:teuthology.orchestra.run.vm08.stdout:Get:24 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T20:16:39.507 INFO:teuthology.orchestra.run.vm08.stdout:Get:25 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T20:16:39.508 INFO:teuthology.orchestra.run.vm08.stdout:Get:26 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T20:16:39.508 INFO:teuthology.orchestra.run.vm08.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T20:16:39.520 INFO:teuthology.orchestra.run.vm03.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T20:16:39.520 INFO:teuthology.orchestra.run.vm03.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T20:16:39.520 INFO:teuthology.orchestra.run.vm03.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T20:16:39.521 INFO:teuthology.orchestra.run.vm03.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T20:16:39.544 INFO:teuthology.orchestra.run.vm08.stdout:Get:28 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T20:16:39.544 INFO:teuthology.orchestra.run.vm08.stdout:Get:29 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T20:16:39.546 INFO:teuthology.orchestra.run.vm08.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T20:16:39.547 INFO:teuthology.orchestra.run.vm08.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T20:16:39.547 INFO:teuthology.orchestra.run.vm08.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T20:16:39.548 INFO:teuthology.orchestra.run.vm08.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T20:16:39.555 INFO:teuthology.orchestra.run.vm03.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T20:16:39.556 INFO:teuthology.orchestra.run.vm03.stdout:Get:35 
https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T20:16:39.556 INFO:teuthology.orchestra.run.vm03.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T20:16:39.557 INFO:teuthology.orchestra.run.vm03.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T20:16:39.557 INFO:teuthology.orchestra.run.vm03.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T20:16:39.561 INFO:teuthology.orchestra.run.vm03.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T20:16:39.581 INFO:teuthology.orchestra.run.vm08.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T20:16:39.581 INFO:teuthology.orchestra.run.vm08.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T20:16:39.582 INFO:teuthology.orchestra.run.vm08.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T20:16:39.582 INFO:teuthology.orchestra.run.vm08.stdout:Get:37 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T20:16:39.591 INFO:teuthology.orchestra.run.vm03.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T20:16:39.592 INFO:teuthology.orchestra.run.vm03.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T20:16:39.592 INFO:teuthology.orchestra.run.vm03.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T20:16:39.593 INFO:teuthology.orchestra.run.vm03.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T20:16:39.618 INFO:teuthology.orchestra.run.vm08.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T20:16:39.623 INFO:teuthology.orchestra.run.vm08.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T20:16:39.623 INFO:teuthology.orchestra.run.vm08.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T20:16:39.623 INFO:teuthology.orchestra.run.vm08.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T20:16:39.624 INFO:teuthology.orchestra.run.vm08.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T20:16:39.625 INFO:teuthology.orchestra.run.vm08.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T20:16:39.627 INFO:teuthology.orchestra.run.vm03.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T20:16:39.628 INFO:teuthology.orchestra.run.vm03.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 
2026-03-09T20:16:39.630 INFO:teuthology.orchestra.run.vm03.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T20:16:39.630 INFO:teuthology.orchestra.run.vm03.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T20:16:39.631 INFO:teuthology.orchestra.run.vm03.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T20:16:39.655 INFO:teuthology.orchestra.run.vm08.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T20:16:39.656 INFO:teuthology.orchestra.run.vm08.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T20:16:39.657 INFO:teuthology.orchestra.run.vm08.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T20:16:39.658 INFO:teuthology.orchestra.run.vm08.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T20:16:39.685 INFO:teuthology.orchestra.run.vm03.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T20:16:39.687 INFO:teuthology.orchestra.run.vm03.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T20:16:39.687 INFO:teuthology.orchestra.run.vm03.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T20:16:39.692 INFO:teuthology.orchestra.run.vm08.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T20:16:39.695 INFO:teuthology.orchestra.run.vm03.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T20:16:39.695 INFO:teuthology.orchestra.run.vm03.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T20:16:39.703 INFO:teuthology.orchestra.run.vm03.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T20:16:39.704 INFO:teuthology.orchestra.run.vm03.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T20:16:39.704 INFO:teuthology.orchestra.run.vm03.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T20:16:39.705 INFO:teuthology.orchestra.run.vm03.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T20:16:39.768 INFO:teuthology.orchestra.run.vm08.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T20:16:39.769 INFO:teuthology.orchestra.run.vm08.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T20:16:39.769 INFO:teuthology.orchestra.run.vm08.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T20:16:39.777 INFO:teuthology.orchestra.run.vm08.stdout:Get:52 https://archive.ubuntu.com/ubuntu jammy/main amd64 
python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T20:16:39.777 INFO:teuthology.orchestra.run.vm08.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T20:16:39.777 INFO:teuthology.orchestra.run.vm03.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T20:16:39.778 INFO:teuthology.orchestra.run.vm08.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T20:16:39.778 INFO:teuthology.orchestra.run.vm08.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T20:16:39.778 INFO:teuthology.orchestra.run.vm03.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T20:16:39.779 INFO:teuthology.orchestra.run.vm08.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T20:16:39.779 INFO:teuthology.orchestra.run.vm03.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T20:16:39.779 INFO:teuthology.orchestra.run.vm08.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T20:16:39.780 INFO:teuthology.orchestra.run.vm03.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T20:16:39.781 INFO:teuthology.orchestra.run.vm03.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T20:16:39.808 INFO:teuthology.orchestra.run.vm08.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T20:16:39.811 INFO:teuthology.orchestra.run.vm03.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T20:16:39.812 INFO:teuthology.orchestra.run.vm03.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T20:16:39.812 INFO:teuthology.orchestra.run.vm03.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T20:16:39.817 INFO:teuthology.orchestra.run.vm03.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T20:16:39.817 INFO:teuthology.orchestra.run.vm03.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T20:16:39.820 INFO:teuthology.orchestra.run.vm08.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T20:16:39.822 INFO:teuthology.orchestra.run.vm08.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T20:16:39.822 INFO:teuthology.orchestra.run.vm08.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T20:16:39.825 INFO:teuthology.orchestra.run.vm08.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T20:16:39.827 INFO:teuthology.orchestra.run.vm08.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket 
amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T20:16:39.852 INFO:teuthology.orchestra.run.vm03.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T20:16:39.853 INFO:teuthology.orchestra.run.vm03.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T20:16:39.853 INFO:teuthology.orchestra.run.vm03.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T20:16:39.854 INFO:teuthology.orchestra.run.vm03.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T20:16:39.855 INFO:teuthology.orchestra.run.vm03.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T20:16:39.856 INFO:teuthology.orchestra.run.vm03.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T20:16:39.863 INFO:teuthology.orchestra.run.vm03.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T20:16:39.882 INFO:teuthology.orchestra.run.vm08.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T20:16:39.883 INFO:teuthology.orchestra.run.vm08.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T20:16:39.888 INFO:teuthology.orchestra.run.vm03.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T20:16:39.888 INFO:teuthology.orchestra.run.vm03.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T20:16:39.891 INFO:teuthology.orchestra.run.vm03.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T20:16:39.924 INFO:teuthology.orchestra.run.vm03.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T20:16:39.956 INFO:teuthology.orchestra.run.vm08.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T20:16:39.957 INFO:teuthology.orchestra.run.vm08.stdout:Get:67 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T20:16:39.994 INFO:teuthology.orchestra.run.vm08.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T20:16:39.994 INFO:teuthology.orchestra.run.vm08.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T20:16:39.994 INFO:teuthology.orchestra.run.vm08.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T20:16:39.995 INFO:teuthology.orchestra.run.vm08.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T20:16:39.996 INFO:teuthology.orchestra.run.vm08.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T20:16:39.996 INFO:teuthology.orchestra.run.vm08.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 
python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T20:16:40.004 INFO:teuthology.orchestra.run.vm08.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T20:16:40.028 INFO:teuthology.orchestra.run.vm08.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T20:16:40.028 INFO:teuthology.orchestra.run.vm08.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T20:16:40.065 INFO:teuthology.orchestra.run.vm08.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T20:16:40.065 INFO:teuthology.orchestra.run.vm08.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T20:16:40.070 INFO:teuthology.orchestra.run.vm03.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T20:16:40.140 INFO:teuthology.orchestra.run.vm08.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T20:16:40.281 INFO:teuthology.orchestra.run.vm03.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T20:16:40.323 INFO:teuthology.orchestra.run.vm04.stdout:Get:3 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T20:16:40.401 INFO:teuthology.orchestra.run.vm03.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T20:16:40.413 INFO:teuthology.orchestra.run.vm03.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T20:16:40.417 INFO:teuthology.orchestra.run.vm03.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T20:16:40.417 INFO:teuthology.orchestra.run.vm03.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T20:16:40.420 INFO:teuthology.orchestra.run.vm03.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T20:16:40.421 INFO:teuthology.orchestra.run.vm03.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T20:16:40.432 INFO:teuthology.orchestra.run.vm03.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 
2026-03-09T20:16:40.562 INFO:teuthology.orchestra.run.vm04.stdout:Get:4 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T20:16:40.565 INFO:teuthology.orchestra.run.vm04.stdout:Get:5 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T20:16:40.566 INFO:teuthology.orchestra.run.vm04.stdout:Get:6 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T20:16:40.567 INFO:teuthology.orchestra.run.vm04.stdout:Get:7 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T20:16:40.567 INFO:teuthology.orchestra.run.vm04.stdout:Get:8 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T20:16:40.568 INFO:teuthology.orchestra.run.vm04.stdout:Get:9 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T20:16:40.578 INFO:teuthology.orchestra.run.vm04.stdout:Get:10 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T20:16:40.677 INFO:teuthology.orchestra.run.vm03.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T20:16:40.744 INFO:teuthology.orchestra.run.vm03.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T20:16:40.749 INFO:teuthology.orchestra.run.vm03.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T20:16:40.932 INFO:teuthology.orchestra.run.vm04.stdout:Get:11 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T20:16:40.933 INFO:teuthology.orchestra.run.vm04.stdout:Get:12 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T20:16:40.941 INFO:teuthology.orchestra.run.vm04.stdout:Get:13 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T20:16:41.233 INFO:teuthology.orchestra.run.vm08.stdout:Get:80 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librados2 amd64 
19.2.3-678-ge911bdeb-1jammy [3597 kB] 2026-03-09T20:16:41.380 INFO:teuthology.orchestra.run.vm08.stdout:Get:81 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs2 amd64 19.2.3-678-ge911bdeb-1jammy [979 kB] 2026-03-09T20:16:41.482 INFO:teuthology.orchestra.run.vm08.stdout:Get:82 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rados amd64 19.2.3-678-ge911bdeb-1jammy [357 kB] 2026-03-09T20:16:41.489 INFO:teuthology.orchestra.run.vm08.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-argparse all 19.2.3-678-ge911bdeb-1jammy [32.9 kB] 2026-03-09T20:16:41.490 INFO:teuthology.orchestra.run.vm08.stdout:Get:84 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-cephfs amd64 19.2.3-678-ge911bdeb-1jammy [184 kB] 2026-03-09T20:16:41.494 INFO:teuthology.orchestra.run.vm08.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-ceph-common all 19.2.3-678-ge911bdeb-1jammy [70.1 kB] 2026-03-09T20:16:41.495 INFO:teuthology.orchestra.run.vm08.stdout:Get:86 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rbd amd64 19.2.3-678-ge911bdeb-1jammy [334 kB] 2026-03-09T20:16:41.503 INFO:teuthology.orchestra.run.vm08.stdout:Get:87 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 librgw2 amd64 19.2.3-678-ge911bdeb-1jammy [6935 kB] 2026-03-09T20:16:41.785 INFO:teuthology.orchestra.run.vm03.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T20:16:41.893 INFO:teuthology.orchestra.run.vm08.stdout:Get:88 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 python3-rgw amd64 19.2.3-678-ge911bdeb-1jammy [112 kB] 2026-03-09T20:16:41.946 INFO:teuthology.orchestra.run.vm08.stdout:Get:89 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libradosstriper1 amd64 19.2.3-678-ge911bdeb-1jammy [470 kB] 2026-03-09T20:16:41.969 INFO:teuthology.orchestra.run.vm08.stdout:Get:90 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-common amd64 19.2.3-678-ge911bdeb-1jammy [26.5 MB] 2026-03-09T20:16:42.025 INFO:teuthology.orchestra.run.vm03.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T20:16:42.030 INFO:teuthology.orchestra.run.vm03.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T20:16:42.031 INFO:teuthology.orchestra.run.vm03.stdout:Get:94 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T20:16:42.050 INFO:teuthology.orchestra.run.vm03.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T20:16:42.314 INFO:teuthology.orchestra.run.vm03.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T20:16:42.332 INFO:teuthology.orchestra.run.vm04.stdout:Get:14 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T20:16:42.593 INFO:teuthology.orchestra.run.vm04.stdout:Get:15 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T20:16:42.598 INFO:teuthology.orchestra.run.vm04.stdout:Get:16 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T20:16:42.601 INFO:teuthology.orchestra.run.vm04.stdout:Get:17 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T20:16:42.629 INFO:teuthology.orchestra.run.vm04.stdout:Get:18 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T20:16:42.928 INFO:teuthology.orchestra.run.vm04.stdout:Get:19 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T20:16:43.367 INFO:teuthology.orchestra.run.vm03.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T20:16:43.367 INFO:teuthology.orchestra.run.vm03.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T20:16:43.392 INFO:teuthology.orchestra.run.vm03.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T20:16:43.427 INFO:teuthology.orchestra.run.vm08.stdout:Get:91 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-base amd64 19.2.3-678-ge911bdeb-1jammy [5178 kB] 2026-03-09T20:16:43.518 INFO:teuthology.orchestra.run.vm03.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T20:16:43.538 INFO:teuthology.orchestra.run.vm03.stdout:Get:101 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T20:16:43.614 INFO:teuthology.orchestra.run.vm03.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T20:16:43.665 INFO:teuthology.orchestra.run.vm03.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T20:16:43.698 INFO:teuthology.orchestra.run.vm08.stdout:Get:92 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-modules-core all 19.2.3-678-ge911bdeb-1jammy [248 kB] 2026-03-09T20:16:43.707 INFO:teuthology.orchestra.run.vm08.stdout:Get:93 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libsqlite3-mod-ceph amd64 19.2.3-678-ge911bdeb-1jammy [125 kB] 2026-03-09T20:16:43.709 INFO:teuthology.orchestra.run.vm08.stdout:Get:94 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr amd64 19.2.3-678-ge911bdeb-1jammy [1081 kB] 2026-03-09T20:16:43.793 INFO:teuthology.orchestra.run.vm08.stdout:Get:95 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mon amd64 19.2.3-678-ge911bdeb-1jammy [6239 kB] 2026-03-09T20:16:44.030 INFO:teuthology.orchestra.run.vm04.stdout:Get:20 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T20:16:44.030 INFO:teuthology.orchestra.run.vm04.stdout:Get:21 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T20:16:44.076 INFO:teuthology.orchestra.run.vm03.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T20:16:44.076 INFO:teuthology.orchestra.run.vm03.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T20:16:44.105 INFO:teuthology.orchestra.run.vm08.stdout:Get:96 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-osd amd64 19.2.3-678-ge911bdeb-1jammy [23.0 MB] 2026-03-09T20:16:44.134 INFO:teuthology.orchestra.run.vm04.stdout:Get:22 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T20:16:44.263 INFO:teuthology.orchestra.run.vm04.stdout:Get:23 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T20:16:44.280 
INFO:teuthology.orchestra.run.vm04.stdout:Get:24 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T20:16:44.282 INFO:teuthology.orchestra.run.vm04.stdout:Get:25 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T20:16:44.414 INFO:teuthology.orchestra.run.vm04.stdout:Get:26 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T20:16:44.834 INFO:teuthology.orchestra.run.vm04.stdout:Get:27 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libdouble-conversion3 amd64 3.1.7-4 [39.0 kB] 2026-03-09T20:16:44.834 INFO:teuthology.orchestra.run.vm04.stdout:Get:28 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T20:16:44.834 INFO:teuthology.orchestra.run.vm04.stdout:Get:29 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T20:16:44.927 INFO:teuthology.orchestra.run.vm04.stdout:Get:30 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libpcre2-16-0 amd64 10.39-3ubuntu0.1 [203 kB] 2026-03-09T20:16:45.056 INFO:teuthology.orchestra.run.vm04.stdout:Get:31 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5core5a amd64 5.15.3+dfsg-2ubuntu0.2 [2006 kB] 2026-03-09T20:16:45.221 INFO:teuthology.orchestra.run.vm04.stdout:Get:32 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5dbus5 amd64 5.15.3+dfsg-2ubuntu0.2 [222 kB] 2026-03-09T20:16:45.260 INFO:teuthology.orchestra.run.vm04.stdout:Get:33 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 libqt5network5 amd64 5.15.3+dfsg-2ubuntu0.2 [731 kB] 2026-03-09T20:16:45.307 INFO:teuthology.orchestra.run.vm04.stdout:Get:34 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libthrift-0.16.0 amd64 0.16.0-2 [267 kB] 2026-03-09T20:16:45.333 INFO:teuthology.orchestra.run.vm08.stdout:Get:97 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph amd64 19.2.3-678-ge911bdeb-1jammy [14.2 kB] 2026-03-09T20:16:45.333 INFO:teuthology.orchestra.run.vm08.stdout:Get:98 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-fuse amd64 19.2.3-678-ge911bdeb-1jammy [1173 kB] 2026-03-09T20:16:45.346 INFO:teuthology.orchestra.run.vm04.stdout:Get:35 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libnbd0 amd64 1.10.5-1 [71.3 kB] 2026-03-09T20:16:45.381 INFO:teuthology.orchestra.run.vm08.stdout:Get:99 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mds amd64 19.2.3-678-ge911bdeb-1jammy [2503 kB] 2026-03-09T20:16:45.384 INFO:teuthology.orchestra.run.vm04.stdout:Get:36 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-wcwidth all 0.2.5+dfsg1-1 [21.9 kB] 2026-03-09T20:16:45.420 INFO:teuthology.orchestra.run.vm04.stdout:Get:37 https://archive.ubuntu.com/ubuntu 
jammy/main amd64 python3-prettytable all 2.5.0-2 [31.3 kB] 2026-03-09T20:16:45.457 INFO:teuthology.orchestra.run.vm04.stdout:Get:38 https://archive.ubuntu.com/ubuntu jammy/universe amd64 librdkafka1 amd64 1.8.0-1build1 [633 kB] 2026-03-09T20:16:45.502 INFO:teuthology.orchestra.run.vm04.stdout:Get:39 https://archive.ubuntu.com/ubuntu jammy/main amd64 libreadline-dev amd64 8.1.2-1 [166 kB] 2026-03-09T20:16:45.516 INFO:teuthology.orchestra.run.vm08.stdout:Get:100 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 cephadm amd64 19.2.3-678-ge911bdeb-1jammy [798 kB] 2026-03-09T20:16:45.541 INFO:teuthology.orchestra.run.vm04.stdout:Get:40 https://archive.ubuntu.com/ubuntu jammy/main amd64 liblua5.3-dev amd64 5.3.6-1build1 [167 kB] 2026-03-09T20:16:45.546 INFO:teuthology.orchestra.run.vm08.stdout:Get:101 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-cephadm all 19.2.3-678-ge911bdeb-1jammy [157 kB] 2026-03-09T20:16:45.555 INFO:teuthology.orchestra.run.vm08.stdout:Get:102 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-dashboard all 19.2.3-678-ge911bdeb-1jammy [2396 kB] 2026-03-09T20:16:45.580 INFO:teuthology.orchestra.run.vm04.stdout:Get:41 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua5.1 amd64 5.1.5-8.1build4 [94.6 kB] 2026-03-09T20:16:45.617 INFO:teuthology.orchestra.run.vm04.stdout:Get:42 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-any all 27ubuntu1 [5034 B] 2026-03-09T20:16:45.653 INFO:teuthology.orchestra.run.vm04.stdout:Get:43 https://archive.ubuntu.com/ubuntu jammy/main amd64 zip amd64 3.0-12build2 [176 kB] 2026-03-09T20:16:45.681 INFO:teuthology.orchestra.run.vm08.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-diskprediction-local all 19.2.3-678-ge911bdeb-1jammy [8625 kB] 2026-03-09T20:16:45.691 INFO:teuthology.orchestra.run.vm04.stdout:Get:44 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 unzip amd64 6.0-26ubuntu3.2 [175 kB] 2026-03-09T20:16:45.729 INFO:teuthology.orchestra.run.vm04.stdout:Get:45 https://archive.ubuntu.com/ubuntu jammy/universe amd64 luarocks all 3.8.0+dfsg1-1 [140 kB] 2026-03-09T20:16:45.767 INFO:teuthology.orchestra.run.vm04.stdout:Get:46 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 liboath0 amd64 2.6.7-3ubuntu0.1 [41.3 kB] 2026-03-09T20:16:45.804 INFO:teuthology.orchestra.run.vm04.stdout:Get:47 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.functools all 3.4.0-2 [9030 B] 2026-03-09T20:16:45.840 INFO:teuthology.orchestra.run.vm04.stdout:Get:48 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-cheroot all 8.5.2+ds1-1ubuntu3.1 [71.1 kB] 2026-03-09T20:16:45.876 INFO:teuthology.orchestra.run.vm04.stdout:Get:49 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.classes all 3.2.1-3 [6452 B] 2026-03-09T20:16:45.912 INFO:teuthology.orchestra.run.vm04.stdout:Get:50 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.text all 3.6.0-2 [8716 B] 2026-03-09T20:16:45.948 INFO:teuthology.orchestra.run.vm04.stdout:Get:51 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jaraco.collections all 3.4.0-2 [11.4 kB] 2026-03-09T20:16:46.060 INFO:teuthology.orchestra.run.vm04.stdout:Get:52 https://archive.ubuntu.com/ubuntu 
jammy/main amd64 python3-tempora all 4.1.2-1 [14.8 kB] 2026-03-09T20:16:46.096 INFO:teuthology.orchestra.run.vm04.stdout:Get:53 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-portend all 3.0.0-1 [7240 B] 2026-03-09T20:16:46.126 INFO:teuthology.orchestra.run.vm08.stdout:Get:104 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-mgr-k8sevents all 19.2.3-678-ge911bdeb-1jammy [14.3 kB] 2026-03-09T20:16:46.127 INFO:teuthology.orchestra.run.vm08.stdout:Get:105 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-test amd64 19.2.3-678-ge911bdeb-1jammy [52.1 MB] 2026-03-09T20:16:46.132 INFO:teuthology.orchestra.run.vm04.stdout:Get:54 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-zc.lockfile all 2.0-1 [8980 B] 2026-03-09T20:16:46.167 INFO:teuthology.orchestra.run.vm04.stdout:Get:55 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cherrypy3 all 18.6.1-4 [208 kB] 2026-03-09T20:16:46.206 INFO:teuthology.orchestra.run.vm04.stdout:Get:56 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-natsort all 8.0.2-1 [35.3 kB] 2026-03-09T20:16:46.242 INFO:teuthology.orchestra.run.vm04.stdout:Get:57 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-logutils all 0.3.3-8 [17.6 kB] 2026-03-09T20:16:46.278 INFO:teuthology.orchestra.run.vm04.stdout:Get:58 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-mako all 1.1.3+ds1-2ubuntu0.1 [60.5 kB] 2026-03-09T20:16:46.315 INFO:teuthology.orchestra.run.vm04.stdout:Get:59 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplegeneric all 0.8.1-3 [11.3 kB] 2026-03-09T20:16:46.351 INFO:teuthology.orchestra.run.vm04.stdout:Get:60 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-singledispatch all 3.4.0.3-3 [7320 B] 2026-03-09T20:16:46.387 INFO:teuthology.orchestra.run.vm04.stdout:Get:61 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-webob all 1:1.8.6-1.1ubuntu0.1 [86.7 kB] 2026-03-09T20:16:46.425 INFO:teuthology.orchestra.run.vm04.stdout:Get:62 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-waitress all 1.4.4-1.1ubuntu1.1 [47.0 kB] 2026-03-09T20:16:46.461 INFO:teuthology.orchestra.run.vm04.stdout:Get:63 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-tempita all 0.5.2-6ubuntu1 [15.1 kB] 2026-03-09T20:16:46.497 INFO:teuthology.orchestra.run.vm04.stdout:Get:64 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-paste all 3.5.0+dfsg1-1 [456 kB] 2026-03-09T20:16:46.539 INFO:teuthology.orchestra.run.vm04.stdout:Get:65 https://archive.ubuntu.com/ubuntu jammy/main amd64 python-pastedeploy-tpl all 2.1.1-1 [4892 B] 2026-03-09T20:16:46.575 INFO:teuthology.orchestra.run.vm03.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T20:16:46.576 INFO:teuthology.orchestra.run.vm04.stdout:Get:66 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastedeploy all 2.1.1-1 [26.6 kB] 2026-03-09T20:16:46.576 INFO:teuthology.orchestra.run.vm03.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T20:16:46.611 INFO:teuthology.orchestra.run.vm04.stdout:Get:67 https://archive.ubuntu.com/ubuntu 
jammy/main amd64 python3-webtest all 2.0.35-1 [28.5 kB] 2026-03-09T20:16:46.646 INFO:teuthology.orchestra.run.vm03.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T20:16:46.647 INFO:teuthology.orchestra.run.vm04.stdout:Get:68 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pecan all 1.3.3-4ubuntu2 [87.3 kB] 2026-03-09T20:16:46.685 INFO:teuthology.orchestra.run.vm04.stdout:Get:69 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-werkzeug all 2.0.2+dfsg1-1ubuntu0.22.04.3 [181 kB] 2026-03-09T20:16:46.723 INFO:teuthology.orchestra.run.vm04.stdout:Get:70 https://archive.ubuntu.com/ubuntu jammy/universe amd64 libfuse2 amd64 2.9.9-5ubuntu3 [90.3 kB] 2026-03-09T20:16:46.761 INFO:teuthology.orchestra.run.vm04.stdout:Get:71 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python3-asyncssh all 2.5.0-1ubuntu0.1 [189 kB] 2026-03-09T20:16:46.801 INFO:teuthology.orchestra.run.vm04.stdout:Get:72 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-repoze.lru all 0.7-2 [12.1 kB] 2026-03-09T20:16:46.838 INFO:teuthology.orchestra.run.vm04.stdout:Get:73 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-routes all 2.5.1-1ubuntu1 [89.0 kB] 2026-03-09T20:16:46.875 INFO:teuthology.orchestra.run.vm04.stdout:Get:74 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn-lib amd64 0.23.2-5ubuntu6 [2058 kB] 2026-03-09T20:16:46.990 INFO:teuthology.orchestra.run.vm04.stdout:Get:75 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-joblib all 0.17.0-4ubuntu1 [204 kB] 2026-03-09T20:16:47.108 INFO:teuthology.orchestra.run.vm04.stdout:Get:76 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-threadpoolctl all 3.1.0-1 [21.3 kB] 2026-03-09T20:16:47.144 INFO:teuthology.orchestra.run.vm04.stdout:Get:77 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-sklearn all 0.23.2-5ubuntu6 [1829 kB] 2026-03-09T20:16:47.232 INFO:teuthology.orchestra.run.vm03.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T20:16:47.257 INFO:teuthology.orchestra.run.vm04.stdout:Get:78 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-cachetools all 5.0.0-1 [9722 B] 2026-03-09T20:16:47.293 INFO:teuthology.orchestra.run.vm04.stdout:Get:79 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-rsa all 4.8-1 [28.4 kB] 2026-03-09T20:16:47.329 INFO:teuthology.orchestra.run.vm04.stdout:Get:80 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-google-auth all 1.5.1-3 [35.7 kB] 2026-03-09T20:16:47.366 INFO:teuthology.orchestra.run.vm04.stdout:Get:81 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-requests-oauthlib all 1.3.0+ds-0.1 [18.7 kB] 2026-03-09T20:16:47.402 INFO:teuthology.orchestra.run.vm04.stdout:Get:82 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-websocket all 1.2.3-1 [34.7 kB] 2026-03-09T20:16:47.417 INFO:teuthology.orchestra.run.vm04.stdout:Get:83 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T20:16:47.418 INFO:teuthology.orchestra.run.vm04.stdout:Get:84 
https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T20:16:47.418 INFO:teuthology.orchestra.run.vm04.stdout:Get:85 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T20:16:47.438 INFO:teuthology.orchestra.run.vm04.stdout:Get:86 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-kubernetes all 12.0.1-1ubuntu1 [353 kB] 2026-03-09T20:16:47.478 INFO:teuthology.orchestra.run.vm04.stdout:Get:87 https://archive.ubuntu.com/ubuntu jammy/main amd64 libonig5 amd64 6.9.7.1-2build1 [172 kB] 2026-03-09T20:16:47.516 INFO:teuthology.orchestra.run.vm04.stdout:Get:88 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 libjq1 amd64 1.6-2.1ubuntu3.1 [133 kB] 2026-03-09T20:16:47.524 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 178 MB in 8s (21.3 MB/s) 2026-03-09T20:16:47.553 INFO:teuthology.orchestra.run.vm04.stdout:Get:89 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 jq amd64 1.6-2.1ubuntu3.1 [52.5 kB] 2026-03-09T20:16:47.589 INFO:teuthology.orchestra.run.vm04.stdout:Get:90 https://archive.ubuntu.com/ubuntu jammy/main amd64 socat amd64 1.7.4.1-3ubuntu4 [349 kB] 2026-03-09T20:16:47.630 INFO:teuthology.orchestra.run.vm04.stdout:Get:91 https://archive.ubuntu.com/ubuntu jammy/universe amd64 xmlstarlet amd64 1.6.1-2.1 [265 kB] 2026-03-09T20:16:47.669 INFO:teuthology.orchestra.run.vm04.stdout:Get:92 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-socket amd64 3.0~rc1+git+ac3201d-6 [78.9 kB] 2026-03-09T20:16:47.706 INFO:teuthology.orchestra.run.vm04.stdout:Get:93 https://archive.ubuntu.com/ubuntu jammy/universe amd64 lua-sec amd64 1.0.2-1 [37.6 kB] 2026-03-09T20:16:47.742 INFO:teuthology.orchestra.run.vm04.stdout:Get:94 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 nvme-cli amd64 1.16-3ubuntu0.3 [474 kB] 2026-03-09T20:16:47.744 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-09T20:16:47.774 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 111717 files and directories currently installed.) 2026-03-09T20:16:47.776 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T20:16:47.778 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T20:16:47.783 INFO:teuthology.orchestra.run.vm04.stdout:Get:95 https://archive.ubuntu.com/ubuntu jammy/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB] 2026-03-09T20:16:47.795 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T20:16:47.800 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ...
2026-03-09T20:16:47.801 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T20:16:47.814 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T20:16:47.819 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T20:16:47.819 INFO:teuthology.orchestra.run.vm04.stdout:Get:96 https://archive.ubuntu.com/ubuntu jammy-updates/universe amd64 python-asyncssh-doc all 2.5.0-1ubuntu0.1 [309 kB] 2026-03-09T20:16:47.819 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T20:16:47.837 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-09T20:16:47.842 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T20:16:47.845 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:47.858 INFO:teuthology.orchestra.run.vm04.stdout:Get:97 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-iniconfig all 1.1.1-2 [6024 B] 2026-03-09T20:16:47.883 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-09T20:16:47.888 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T20:16:47.888 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:47.894 INFO:teuthology.orchestra.run.vm04.stdout:Get:98 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pastescript all 2.0.2-4 [54.6 kB] 2026-03-09T20:16:47.905 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-09T20:16:47.910 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T20:16:47.911 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:47.931 INFO:teuthology.orchestra.run.vm04.stdout:Get:99 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pluggy all 0.13.0-7.1 [19.0 kB] 2026-03-09T20:16:47.933 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-09T20:16:47.938 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-09T20:16:47.939 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T20:16:47.962 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:47.964 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T20:16:47.967 INFO:teuthology.orchestra.run.vm04.stdout:Get:100 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-psutil amd64 5.9.0-1build1 [158 kB] 2026-03-09T20:16:48.005 INFO:teuthology.orchestra.run.vm04.stdout:Get:101 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-py all 1.10.0-1 [71.9 kB] 2026-03-09T20:16:48.035 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-09T20:16:48.037 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T20:16:48.041 INFO:teuthology.orchestra.run.vm04.stdout:Get:102 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 python3-pygments all 2.11.2+dfsg-2ubuntu0.1 [750 kB] 2026-03-09T20:16:48.101 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libnbd0. 2026-03-09T20:16:48.107 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-09T20:16:48.107 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-09T20:16:48.109 INFO:teuthology.orchestra.run.vm04.stdout:Get:103 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T20:16:48.122 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs2. 2026-03-09T20:16:48.128 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:48.129 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:48.157 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rados. 2026-03-09T20:16:48.159 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:48.160 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:48.161 INFO:teuthology.orchestra.run.vm04.stdout:Get:104 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-pyinotify all 0.9.6-1.3 [24.8 kB] 2026-03-09T20:16:48.178 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-09T20:16:48.183 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:48.183 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:48.198 INFO:teuthology.orchestra.run.vm04.stdout:Get:105 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-toml all 0.10.2-1 [16.5 kB] 2026-03-09T20:16:48.198 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cephfs. 2026-03-09T20:16:48.205 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:48.206 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:48.232 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-09T20:16:48.234 INFO:teuthology.orchestra.run.vm04.stdout:Get:106 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-pytest all 6.2.5-1ubuntu2 [214 kB] 2026-03-09T20:16:48.237 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:48.238 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T20:16:48.256 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-09T20:16:48.261 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-09T20:16:48.262 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T20:16:48.272 INFO:teuthology.orchestra.run.vm04.stdout:Get:107 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-simplejson amd64 3.17.6-1build1 [54.7 kB] 2026-03-09T20:16:48.278 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-prettytable. 2026-03-09T20:16:48.282 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-09T20:16:48.283 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-09T20:16:48.302 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rbd. 2026-03-09T20:16:48.307 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:48.308 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:48.309 INFO:teuthology.orchestra.run.vm04.stdout:Get:108 https://archive.ubuntu.com/ubuntu jammy/universe amd64 qttranslations5-l10n all 5.15.3-1 [1983 kB] 2026-03-09T20:16:48.327 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-09T20:16:48.332 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-09T20:16:48.332 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T20:16:48.352 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-09T20:16:48.358 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-09T20:16:48.359 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T20:16:48.368 INFO:teuthology.orchestra.run.vm04.stdout:Get:109 https://archive.ubuntu.com/ubuntu jammy-updates/main amd64 smartmontools amd64 7.2-1ubuntu0.1 [583 kB] 2026-03-09T20:16:48.375 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-09T20:16:48.380 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-09T20:16:48.382 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T20:16:48.413 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua5.1. 2026-03-09T20:16:48.418 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-09T20:16:48.419 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-09T20:16:48.438 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-any. 2026-03-09T20:16:48.442 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-09T20:16:48.443 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-any (27ubuntu1) ... 
2026-03-09T20:16:48.456 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package zip. 2026-03-09T20:16:48.461 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-09T20:16:48.462 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking zip (3.0-12build2) ... 2026-03-09T20:16:48.478 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package unzip. 2026-03-09T20:16:48.483 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-09T20:16:48.484 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-09T20:16:48.502 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package luarocks. 2026-03-09T20:16:48.507 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-09T20:16:48.508 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-09T20:16:48.681 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 178 MB in 9s (18.7 MB/s) 2026-03-09T20:16:48.922 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package librgw2. 2026-03-09T20:16:48.924 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-09T20:16:48.927 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:48.928 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:48.953 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 111717 files and directories currently installed.) 2026-03-09T20:16:48.955 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T20:16:48.957 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T20:16:48.977 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T20:16:48.982 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-09T20:16:48.983 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T20:16:48.997 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T20:16:49.002 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T20:16:49.003 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T20:16:49.049 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rgw.
2026-03-09T20:16:49.052 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:49.053 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.053 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-09T20:16:49.058 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T20:16:49.062 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:49.072 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-09T20:16:49.077 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-09T20:16:49.077 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T20:16:49.099 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-09T20:16:49.100 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libradosstriper1. 2026-03-09T20:16:49.104 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T20:16:49.105 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:49.106 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:49.106 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.123 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-09T20:16:49.127 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-common. 2026-03-09T20:16:49.128 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T20:16:49.129 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:49.132 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:49.133 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.151 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-09T20:16:49.156 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-09T20:16:49.157 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T20:16:49.182 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:49.184 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T20:16:49.259 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 
2026-03-09T20:16:49.261 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T20:16:49.326 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libnbd0. 2026-03-09T20:16:49.332 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-09T20:16:49.332 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-09T20:16:49.346 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libcephfs2. 2026-03-09T20:16:49.352 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:49.353 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.378 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rados. 2026-03-09T20:16:49.383 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:49.384 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.530 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-09T20:16:49.536 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:49.537 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.537 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-base. 2026-03-09T20:16:49.542 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:49.546 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.554 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cephfs. 2026-03-09T20:16:49.559 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:49.560 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.580 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-09T20:16:49.585 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:49.602 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.631 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-09T20:16:49.631 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-09T20:16:49.636 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-09T20:16:49.636 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-09T20:16:49.637 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 
2026-03-09T20:16:49.637 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-09T20:16:49.655 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cheroot. 2026-03-09T20:16:49.656 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-prettytable. 2026-03-09T20:16:49.661 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-09T20:16:49.661 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-09T20:16:49.662 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-09T20:16:49.662 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T20:16:49.678 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rbd. 2026-03-09T20:16:49.681 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T20:16:49.682 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:49.683 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.686 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-09T20:16:49.687 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T20:16:49.700 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T20:16:49.703 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-09T20:16:49.705 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-09T20:16:49.706 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T20:16:49.708 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-09T20:16:49.708 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T20:16:49.722 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T20:16:49.728 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T20:16:49.729 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T20:16:49.729 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-09T20:16:49.734 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-09T20:16:49.734 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T20:16:49.746 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempora. 2026-03-09T20:16:49.752 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T20:16:49.753 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempora (4.1.2-1) ... 
2026-03-09T20:16:49.754 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-09T20:16:49.758 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-09T20:16:49.758 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T20:16:49.770 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-portend. 2026-03-09T20:16:49.776 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T20:16:49.776 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-09T20:16:49.780 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua5.1. 2026-03-09T20:16:49.784 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-09T20:16:49.784 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-09T20:16:49.792 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-09T20:16:49.798 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T20:16:49.799 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-09T20:16:49.803 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-any. 2026-03-09T20:16:49.807 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-09T20:16:49.808 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-09T20:16:49.815 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-09T20:16:49.819 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package zip. 2026-03-09T20:16:49.821 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T20:16:49.822 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T20:16:49.823 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-09T20:16:49.823 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking zip (3.0-12build2) ... 2026-03-09T20:16:49.842 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package unzip. 2026-03-09T20:16:49.847 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-09T20:16:49.848 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-09T20:16:49.853 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T20:16:49.859 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-09T20:16:49.860 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-09T20:16:49.869 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package luarocks. 2026-03-09T20:16:49.873 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-09T20:16:49.874 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 
2026-03-09T20:16:49.878 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T20:16:49.884 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-09T20:16:49.885 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T20:16:49.904 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-mako. 2026-03-09T20:16:49.911 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-09T20:16:49.911 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T20:16:49.922 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package librgw2. 2026-03-09T20:16:49.927 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:49.928 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:49.932 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T20:16:49.938 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-09T20:16:49.939 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T20:16:49.955 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-09T20:16:49.960 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-09T20:16:49.961 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T20:16:49.974 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webob. 2026-03-09T20:16:49.979 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T20:16:49.979 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T20:16:50.027 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-waitress. 2026-03-09T20:16:50.033 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-09T20:16:50.036 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T20:16:50.038 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rgw. 2026-03-09T20:16:50.042 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:50.043 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:50.057 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T20:16:50.060 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-09T20:16:50.064 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-09T20:16:50.065 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 
2026-03-09T20:16:50.065 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-09T20:16:50.066 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T20:16:50.082 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-paste. 2026-03-09T20:16:50.083 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libradosstriper1. 2026-03-09T20:16:50.088 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-09T20:16:50.088 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:50.089 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T20:16:50.089 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:50.113 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-common. 2026-03-09T20:16:50.119 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:50.119 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:50.122 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-09T20:16:50.128 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T20:16:50.129 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T20:16:50.142 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T20:16:50.147 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T20:16:50.148 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T20:16:50.163 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T20:16:50.167 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-09T20:16:50.167 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-09T20:16:50.199 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pecan. 2026-03-09T20:16:50.203 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-09T20:16:50.204 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T20:16:50.231 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T20:16:50.236 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T20:16:50.236 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T20:16:50.258 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-modules-core. 
2026-03-09T20:16:50.262 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:50.263 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:50.276 INFO:teuthology.orchestra.run.vm08.stdout:Get:106 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 ceph-volume all 19.2.3-678-ge911bdeb-1jammy [135 kB] 2026-03-09T20:16:50.276 INFO:teuthology.orchestra.run.vm08.stdout:Get:107 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 libcephfs-dev amd64 19.2.3-678-ge911bdeb-1jammy [41.0 kB] 2026-03-09T20:16:50.277 INFO:teuthology.orchestra.run.vm08.stdout:Get:108 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 radosgw amd64 19.2.3-678-ge911bdeb-1jammy [13.7 MB] 2026-03-09T20:16:50.297 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T20:16:50.302 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:50.303 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:50.320 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr. 2026-03-09T20:16:50.326 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:50.327 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:50.484 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mon. 2026-03-09T20:16:50.491 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:50.491 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:50.500 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-base. 2026-03-09T20:16:50.505 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:50.509 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:50.625 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-09T20:16:50.629 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T20:16:50.631 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-09T20:16:50.631 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T20:16:50.632 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-09T20:16:50.632 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T20:16:50.645 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cheroot. 
2026-03-09T20:16:50.649 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-09T20:16:50.650 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T20:16:50.650 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T20:16:50.655 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:50.656 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:50.667 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T20:16:50.670 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-09T20:16:50.671 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T20:16:50.684 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T20:16:50.688 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-09T20:16:50.689 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T20:16:50.702 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T20:16:50.705 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T20:16:50.707 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T20:16:50.720 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-tempora. 2026-03-09T20:16:50.723 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T20:16:50.724 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-09T20:16:50.738 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-portend. 2026-03-09T20:16:50.744 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T20:16:50.744 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-09T20:16:50.761 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-09T20:16:50.768 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T20:16:50.769 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-09T20:16:50.785 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-09T20:16:50.791 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T20:16:50.792 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T20:16:50.820 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T20:16:50.826 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 
2026-03-09T20:16:50.826 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-natsort (8.0.2-1) ... 2026-03-09T20:16:50.843 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T20:16:50.848 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-09T20:16:50.849 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T20:16:50.958 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-mako. 2026-03-09T20:16:50.963 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-09T20:16:50.963 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph. 2026-03-09T20:16:50.963 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T20:16:50.968 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:50.969 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:50.981 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T20:16:50.984 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T20:16:50.986 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-09T20:16:50.987 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T20:16:50.991 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:50.991 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:51.000 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-09T20:16:51.005 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-09T20:16:51.006 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T20:16:51.022 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-webob. 2026-03-09T20:16:51.026 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T20:16:51.028 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T20:16:51.029 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T20:16:51.032 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:51.032 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:51.045 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-waitress. 2026-03-09T20:16:51.050 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-09T20:16:51.052 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 
2026-03-09T20:16:51.073 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T20:16:51.078 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-09T20:16:51.079 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package cephadm. 2026-03-09T20:16:51.079 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T20:16:51.084 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:51.085 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:51.092 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-paste. 2026-03-09T20:16:51.097 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-09T20:16:51.098 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T20:16:51.103 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T20:16:51.109 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T20:16:51.110 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T20:16:51.132 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-09T20:16:51.134 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T20:16:51.138 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T20:16:51.139 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T20:16:51.140 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:51.140 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:51.154 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T20:16:51.159 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T20:16:51.160 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T20:16:51.162 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T20:16:51.167 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T20:16:51.167 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T20:16:51.176 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T20:16:51.180 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-routes. 2026-03-09T20:16:51.180 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-09T20:16:51.181 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-webtest (2.0.35-1) ... 
2026-03-09T20:16:51.184 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T20:16:51.184 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T20:16:51.197 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pecan. 2026-03-09T20:16:51.202 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 2026-03-09T20:16:51.203 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T20:16:51.208 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-09T20:16:51.213 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:51.214 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:51.234 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T20:16:51.241 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T20:16:51.242 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T20:16:51.266 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-09T20:16:51.272 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:51.273 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:51.314 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T20:16:51.315 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:51.315 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:51.334 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr. 2026-03-09T20:16:51.336 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:51.337 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:51.365 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mon. 2026-03-09T20:16:51.370 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:51.371 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:51.457 INFO:teuthology.orchestra.run.vm08.stdout:Get:109 https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default jammy/main amd64 rbd-fuse amd64 19.2.3-678-ge911bdeb-1jammy [92.2 kB] 2026-03-09T20:16:51.584 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T20:16:51.589 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 
2026-03-09T20:16:51.590 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T20:16:51.603 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T20:16:51.609 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-09T20:16:51.609 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T20:16:51.609 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T20:16:51.616 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:51.616 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:51.670 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T20:16:51.676 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T20:16:51.677 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T20:16:51.711 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T20:16:51.717 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T20:16:51.718 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T20:16:51.735 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T20:16:51.741 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T20:16:51.742 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T20:16:51.764 INFO:teuthology.orchestra.run.vm08.stdout:Fetched 178 MB in 13s (14.1 MB/s) 2026-03-09T20:16:51.985 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package liblttng-ust1:amd64. 2026-03-09T20:16:51.995 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph. 2026-03-09T20:16:51.995 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T20:16:52.000 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.001 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.001 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:52.002 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.022 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 111717 files and directories currently installed.)
2026-03-09T20:16:52.022 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T20:16:52.024 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../000-liblttng-ust1_2.13.1-1ubuntu1_amd64.deb ... 2026-03-09T20:16:52.026 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T20:16:52.027 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.028 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.048 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libdouble-conversion3:amd64. 2026-03-09T20:16:52.054 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../001-libdouble-conversion3_3.1.7-4_amd64.deb ... 2026-03-09T20:16:52.055 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T20:16:52.057 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T20:16:52.062 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.068 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.078 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libpcre2-16-0:amd64. 2026-03-09T20:16:52.088 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../002-libpcre2-16-0_10.39-3ubuntu0.1_amd64.deb ... 2026-03-09T20:16:52.089 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T20:16:52.114 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package cephadm. 2026-03-09T20:16:52.118 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.119 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.119 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libqt5core5a:amd64. 2026-03-09T20:16:52.125 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../003-libqt5core5a_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T20:16:52.130 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:52.136 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T20:16:52.140 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T20:16:52.141 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T20:16:52.271 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T20:16:52.271 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libqt5dbus5:amd64. 2026-03-09T20:16:52.277 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../004-libqt5dbus5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 
2026-03-09T20:16:52.277 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:52.295 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:52.296 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.311 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T20:16:52.314 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libqt5network5:amd64. 2026-03-09T20:16:52.316 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T20:16:52.317 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T20:16:52.318 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../005-libqt5network5_5.15.3+dfsg-2ubuntu0.2_amd64.deb ... 2026-03-09T20:16:52.319 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:52.322 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T20:16:52.328 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T20:16:52.333 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T20:16:52.334 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T20:16:52.340 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T20:16:52.341 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T20:16:52.347 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libthrift-0.16.0:amd64. 2026-03-09T20:16:52.349 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-routes. 2026-03-09T20:16:52.351 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../006-libthrift-0.16.0_0.16.0-2_amd64.deb ... 2026-03-09T20:16:52.352 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T20:16:52.355 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T20:16:52.356 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T20:16:52.361 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T20:16:52.367 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T20:16:52.368 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T20:16:52.375 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../007-librbd1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.378 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking librbd1 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T20:16:52.378 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-dashboard. 
2026-03-09T20:16:52.385 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:52.385 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.388 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T20:16:52.394 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T20:16:52.395 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T20:16:52.415 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T20:16:52.421 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T20:16:52.433 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T20:16:52.455 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../008-librados2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.457 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T20:16:52.457 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking librados2 (19.2.3-678-ge911bdeb-1jammy) over (17.2.9-0ubuntu0.22.04.2) ... 2026-03-09T20:16:52.463 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T20:16:52.477 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T20:16:52.533 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libnbd0. 2026-03-09T20:16:52.539 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../009-libnbd0_1.10.5-1_amd64.deb ... 2026-03-09T20:16:52.540 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libnbd0 (1.10.5-1) ... 2026-03-09T20:16:52.558 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libcephfs2. 2026-03-09T20:16:52.564 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../010-libcephfs2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.565 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.706 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-rados. 2026-03-09T20:16:52.711 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../011-python3-rados_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.732 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.733 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T20:16:52.740 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:52.742 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.755 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-ceph-argparse. 2026-03-09T20:16:52.759 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libonig5:amd64. 
2026-03-09T20:16:52.760 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T20:16:52.760 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../012-python3-ceph-argparse_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:52.761 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.765 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T20:16:52.766 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T20:16:52.766 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-09T20:16:52.767 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T20:16:52.775 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-cephfs. 2026-03-09T20:16:52.780 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../013-python3-cephfs_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.781 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.786 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T20:16:52.791 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T20:16:52.792 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T20:16:52.798 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-ceph-common. 2026-03-09T20:16:52.804 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../014-python3-ceph-common_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:52.815 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.823 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package jq. 2026-03-09T20:16:52.829 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T20:16:52.831 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T20:16:52.832 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-09T20:16:52.836 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T20:16:52.837 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-wcwidth. 2026-03-09T20:16:52.837 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T20:16:52.843 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../015-python3-wcwidth_0.2.5+dfsg1-1_all.deb ... 2026-03-09T20:16:52.844 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T20:16:52.851 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package socat. 2026-03-09T20:16:52.857 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-09T20:16:52.859 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 
2026-03-09T20:16:52.864 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-prettytable. 2026-03-09T20:16:52.870 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../016-python3-prettytable_2.5.0-2_all.deb ... 2026-03-09T20:16:52.872 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-prettytable (2.5.0-2) ... 2026-03-09T20:16:52.874 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T20:16:52.880 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T20:16:52.881 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T20:16:52.884 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package xmlstarlet. 2026-03-09T20:16:52.889 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-rbd. 2026-03-09T20:16:52.890 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T20:16:52.891 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T20:16:52.894 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../017-python3-rbd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.895 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.897 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-sklearn. 2026-03-09T20:16:52.904 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T20:16:52.905 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T20:16:52.920 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package librdkafka1:amd64. 2026-03-09T20:16:52.926 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../018-librdkafka1_1.8.0-1build1_amd64.deb ... 2026-03-09T20:16:52.927 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T20:16:52.940 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-test. 2026-03-09T20:16:52.946 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:52.947 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:52.947 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libreadline-dev:amd64. 2026-03-09T20:16:52.952 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../019-libreadline-dev_8.1.2-1_amd64.deb ... 2026-03-09T20:16:52.953 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T20:16:52.971 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package liblua5.3-dev:amd64. 2026-03-09T20:16:52.977 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../020-liblua5.3-dev_5.3.6-1build1_amd64.deb ... 2026-03-09T20:16:52.978 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T20:16:53.013 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package lua5.1. 
2026-03-09T20:16:53.018 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../021-lua5.1_5.1.5-8.1build4_amd64.deb ... 2026-03-09T20:16:53.020 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking lua5.1 (5.1.5-8.1build4) ... 2026-03-09T20:16:53.039 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T20:16:53.041 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package lua-any. 2026-03-09T20:16:53.047 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:53.048 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../022-lua-any_27ubuntu1_all.deb ... 2026-03-09T20:16:53.048 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:53.049 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking lua-any (27ubuntu1) ... 2026-03-09T20:16:53.063 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package zip. 2026-03-09T20:16:53.069 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../023-zip_3.0-12build2_amd64.deb ... 2026-03-09T20:16:53.070 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking zip (3.0-12build2) ... 2026-03-09T20:16:53.089 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package unzip. 2026-03-09T20:16:53.094 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../024-unzip_6.0-26ubuntu3.2_amd64.deb ... 2026-03-09T20:16:53.096 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking unzip (6.0-26ubuntu3.2) ... 2026-03-09T20:16:53.116 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package luarocks. 2026-03-09T20:16:53.121 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../025-luarocks_3.8.0+dfsg1-1_all.deb ... 2026-03-09T20:16:53.123 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking luarocks (3.8.0+dfsg1-1) ... 2026-03-09T20:16:53.176 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package librgw2. 2026-03-09T20:16:53.182 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../026-librgw2_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:53.183 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:53.363 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-rgw. 2026-03-09T20:16:53.368 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T20:16:53.369 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../027-python3-rgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:53.370 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:53.374 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T20:16:53.374 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T20:16:53.388 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package liboath0:amd64. 2026-03-09T20:16:53.388 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-rsa. 
2026-03-09T20:16:53.393 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../028-liboath0_2.6.7-3ubuntu0.1_amd64.deb ... 2026-03-09T20:16:53.394 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T20:16:53.394 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T20:16:53.395 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T20:16:53.408 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libradosstriper1. 2026-03-09T20:16:53.412 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T20:16:53.414 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../029-libradosstriper1_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:53.415 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:53.417 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T20:16:53.418 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T20:16:53.437 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-common. 2026-03-09T20:16:53.438 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T20:16:53.442 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T20:16:53.443 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../030-ceph-common_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:53.443 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T20:16:53.444 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:53.459 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T20:16:53.463 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T20:16:53.464 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T20:16:53.484 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T20:16:53.489 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T20:16:53.595 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T20:16:53.949 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T20:16:53.954 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-base. 2026-03-09T20:16:53.955 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:53.956 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package ceph-volume. 2026-03-09T20:16:53.956 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T20:16:53.960 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../031-ceph-base_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:53.962 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:53.963 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:53.964 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:53.971 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-09T20:16:53.977 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T20:16:53.978 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T20:16:53.990 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-09T20:16:53.996 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:53.998 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:54.000 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T20:16:54.007 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T20:16:54.008 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T20:16:54.013 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-09T20:16:54.019 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-09T20:16:54.020 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T20:16:54.027 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package jq. 2026-03-09T20:16:54.035 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T20:16:54.058 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 2026-03-09T20:16:54.071 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-09T20:16:54.076 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package socat. 2026-03-09T20:16:54.076 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-jaraco.functools. 2026-03-09T20:16:54.077 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-09T20:16:54.078 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-09T20:16:54.083 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../032-python3-jaraco.functools_3.4.0-2_all.deb ... 2026-03-09T20:16:54.083 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-09T20:16:54.084 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-jaraco.functools (3.4.0-2) ... 2026-03-09T20:16:54.085 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 
2026-03-09T20:16:54.097 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package nvme-cli. 2026-03-09T20:16:54.102 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-cheroot. 2026-03-09T20:16:54.103 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-09T20:16:54.104 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T20:16:54.108 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../033-python3-cheroot_8.5.2+ds1-1ubuntu3.1_all.deb ... 2026-03-09T20:16:54.109 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T20:16:54.111 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package xmlstarlet. 2026-03-09T20:16:54.118 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T20:16:54.120 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T20:16:54.134 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-jaraco.classes. 2026-03-09T20:16:54.141 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../034-python3-jaraco.classes_3.2.1-3_all.deb ... 2026-03-09T20:16:54.142 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-jaraco.classes (3.2.1-3) ... 2026-03-09T20:16:54.144 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package pkg-config. 2026-03-09T20:16:54.149 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-09T20:16:54.150 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T20:16:54.169 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-test. 2026-03-09T20:16:54.170 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-jaraco.text. 2026-03-09T20:16:54.172 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-09T20:16:54.175 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:54.176 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../035-python3-jaraco.text_3.6.0-2_all.deb ... 2026-03-09T20:16:54.176 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:54.177 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-jaraco.text (3.6.0-2) ... 2026-03-09T20:16:54.178 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T20:16:54.179 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T20:16:54.193 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-jaraco.collections. 2026-03-09T20:16:54.199 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../036-python3-jaraco.collections_3.4.0-2_all.deb ... 2026-03-09T20:16:54.200 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-jaraco.collections (3.4.0-2) ... 2026-03-09T20:16:54.224 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-tempora. 
2026-03-09T20:16:54.226 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T20:16:54.230 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../037-python3-tempora_4.1.2-1_all.deb ... 2026-03-09T20:16:54.231 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-tempora (4.1.2-1) ... 2026-03-09T20:16:54.233 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-09T20:16:54.234 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-09T20:16:54.251 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pastescript. 2026-03-09T20:16:54.252 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-portend. 2026-03-09T20:16:54.257 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-09T20:16:54.258 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-09T20:16:54.258 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../038-python3-portend_3.0.0-1_all.deb ... 2026-03-09T20:16:54.259 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-portend (3.0.0-1) ... 2026-03-09T20:16:54.278 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-zc.lockfile. 2026-03-09T20:16:54.280 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pluggy. 2026-03-09T20:16:54.286 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../039-python3-zc.lockfile_2.0-1_all.deb ... 2026-03-09T20:16:54.287 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-09T20:16:54.287 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-zc.lockfile (2.0-1) ... 2026-03-09T20:16:54.288 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-09T20:16:54.307 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-cherrypy3. 2026-03-09T20:16:54.307 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-psutil. 2026-03-09T20:16:54.313 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-09T20:16:54.313 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../040-python3-cherrypy3_18.6.1-4_all.deb ... 2026-03-09T20:16:54.314 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-09T20:16:54.314 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-cherrypy3 (18.6.1-4) ... 2026-03-09T20:16:54.337 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-py. 2026-03-09T20:16:54.343 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-09T20:16:54.344 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-09T20:16:54.345 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-natsort. 2026-03-09T20:16:54.351 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../041-python3-natsort_8.0.2-1_all.deb ... 2026-03-09T20:16:54.352 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-natsort (8.0.2-1) ... 
2026-03-09T20:16:54.370 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pygments. 2026-03-09T20:16:54.372 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-logutils. 2026-03-09T20:16:54.376 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-09T20:16:54.377 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T20:16:54.378 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../042-python3-logutils_0.3.3-8_all.deb ... 2026-03-09T20:16:54.379 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-logutils (0.3.3-8) ... 2026-03-09T20:16:54.395 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-mako. 2026-03-09T20:16:54.401 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../043-python3-mako_1.1.3+ds1-2ubuntu0.1_all.deb ... 2026-03-09T20:16:54.402 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T20:16:54.428 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-simplegeneric. 2026-03-09T20:16:54.434 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../044-python3-simplegeneric_0.8.1-3_all.deb ... 2026-03-09T20:16:54.435 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-simplegeneric (0.8.1-3) ... 2026-03-09T20:16:54.442 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-09T20:16:54.448 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-09T20:16:54.449 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-09T20:16:54.450 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-singledispatch. 2026-03-09T20:16:54.456 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../045-python3-singledispatch_3.4.0.3-3_all.deb ... 2026-03-09T20:16:54.457 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-singledispatch (3.4.0.3-3) ... 2026-03-09T20:16:54.467 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-toml. 2026-03-09T20:16:54.471 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-webob. 2026-03-09T20:16:54.472 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-09T20:16:54.473 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-09T20:16:54.477 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../046-python3-webob_1%3a1.8.6-1.1ubuntu0.1_all.deb ... 2026-03-09T20:16:54.477 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T20:16:54.489 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-pytest. 2026-03-09T20:16:54.492 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-09T20:16:54.494 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T20:16:54.496 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-waitress. 
2026-03-09T20:16:54.502 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../047-python3-waitress_1.4.4-1.1ubuntu1.1_all.deb ... 2026-03-09T20:16:54.504 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T20:16:54.521 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-simplejson. 2026-03-09T20:16:54.521 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-tempita. 2026-03-09T20:16:54.524 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-09T20:16:54.525 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-09T20:16:54.527 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../048-python3-tempita_0.5.2-6ubuntu1_all.deb ... 2026-03-09T20:16:54.528 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T20:16:54.542 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-09T20:16:54.544 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-paste. 2026-03-09T20:16:54.547 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-09T20:16:54.548 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-09T20:16:54.549 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../049-python3-paste_3.5.0+dfsg1-1_all.deb ... 2026-03-09T20:16:54.550 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T20:16:54.584 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python-pastedeploy-tpl. 2026-03-09T20:16:54.590 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../050-python-pastedeploy-tpl_2.1.1-1_all.deb ... 2026-03-09T20:16:54.591 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T20:16:54.607 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pastedeploy. 2026-03-09T20:16:54.612 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../051-python3-pastedeploy_2.1.1-1_all.deb ... 2026-03-09T20:16:54.631 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pastedeploy (2.1.1-1) ... 2026-03-09T20:16:54.648 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-webtest. 2026-03-09T20:16:54.655 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../052-python3-webtest_2.0.35-1_all.deb ... 2026-03-09T20:16:54.656 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-webtest (2.0.35-1) ... 2026-03-09T20:16:54.661 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package radosgw. 2026-03-09T20:16:54.667 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:54.668 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:54.674 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pecan. 2026-03-09T20:16:54.682 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../053-python3-pecan_1.3.3-4ubuntu2_all.deb ... 
2026-03-09T20:16:54.682 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T20:16:55.048 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-werkzeug. 2026-03-09T20:16:55.053 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../054-python3-werkzeug_2.0.2+dfsg1-1ubuntu0.22.04.3_all.deb ... 2026-03-09T20:16:55.055 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T20:16:55.058 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package rbd-fuse. 2026-03-09T20:16:55.063 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.065 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.065 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package ceph-volume. 2026-03-09T20:16:55.071 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:55.072 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.076 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr-modules-core. 2026-03-09T20:16:55.080 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package smartmontools. 2026-03-09T20:16:55.082 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../055-ceph-mgr-modules-core_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:55.082 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.085 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-09T20:16:55.093 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T20:16:55.104 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-09T20:16:55.109 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.110 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.122 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libsqlite3-mod-ceph. 2026-03-09T20:16:55.127 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../056-libsqlite3-mod-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.128 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.129 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-09T20:16:55.134 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-09T20:16:55.135 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T20:16:55.137 INFO:teuthology.orchestra.run.vm03.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T20:16:55.148 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr. 
2026-03-09T20:16:55.154 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../057-ceph-mgr_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.155 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.160 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-09T20:16:55.165 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-09T20:16:55.166 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-09T20:16:55.185 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package nvme-cli. 2026-03-09T20:16:55.188 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mon. 2026-03-09T20:16:55.191 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-09T20:16:55.191 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T20:16:55.194 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../058-ceph-mon_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.195 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.229 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package pkg-config. 2026-03-09T20:16:55.235 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 2026-03-09T20:16:55.236 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T20:16:55.252 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-09T20:16:55.258 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T20:16:55.281 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T20:16:55.296 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libfuse2:amd64. 2026-03-09T20:16:55.303 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../059-libfuse2_2.9.9-5ubuntu3_amd64.deb ... 2026-03-09T20:16:55.311 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T20:16:55.326 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T20:16:55.330 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-osd. 2026-03-09T20:16:55.332 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-09T20:16:55.332 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-09T20:16:55.337 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../060-ceph-osd_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.337 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.348 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pastescript. 2026-03-09T20:16:55.353 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 
2026-03-09T20:16:55.354 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-09T20:16:55.375 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pluggy. 2026-03-09T20:16:55.381 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-09T20:16:55.382 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-09T20:16:55.398 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-psutil. 2026-03-09T20:16:55.404 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-09T20:16:55.405 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-09T20:16:55.411 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-09T20:16:55.411 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-09T20:16:55.427 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-py. 2026-03-09T20:16:55.433 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-09T20:16:55.434 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-09T20:16:55.460 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pygments. 2026-03-09T20:16:55.465 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-09T20:16:55.466 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T20:16:55.616 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph. 2026-03-09T20:16:55.618 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-09T20:16:55.619 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-09T20:16:55.620 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-09T20:16:55.621 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../061-ceph_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.622 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.635 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-fuse. 2026-03-09T20:16:55.638 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-toml. 2026-03-09T20:16:55.638 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-09T20:16:55.639 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-09T20:16:55.640 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../062-ceph-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.640 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.657 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-pytest. 
2026-03-09T20:16:55.657 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-09T20:16:55.658 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T20:16:55.672 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mds. 2026-03-09T20:16:55.678 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../063-ceph-mds_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.678 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.686 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-simplejson. 2026-03-09T20:16:55.692 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-09T20:16:55.693 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-09T20:16:55.724 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-09T20:16:55.724 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package cephadm. 2026-03-09T20:16:55.730 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../064-cephadm_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.730 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-09T20:16:55.730 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.731 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-09T20:16:55.751 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-asyncssh. 2026-03-09T20:16:55.757 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../065-python3-asyncssh_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T20:16:55.758 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T20:16:55.785 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr-cephadm. 2026-03-09T20:16:55.790 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../066-ceph-mgr-cephadm_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:55.791 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.816 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-09T20:16:55.831 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-repoze.lru. 2026-03-09T20:16:55.837 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../067-python3-repoze.lru_0.7-2_all.deb ... 2026-03-09T20:16:55.837 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-repoze.lru (0.7-2) ... 2026-03-09T20:16:55.840 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package radosgw. 2026-03-09T20:16:55.846 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:55.847 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.855 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-routes. 
2026-03-09T20:16:55.860 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../068-python3-routes_2.5.1-1ubuntu1_all.deb ... 2026-03-09T20:16:55.860 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T20:16:55.880 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T20:16:55.883 INFO:teuthology.orchestra.run.vm03.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T20:16:55.886 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr-dashboard. 2026-03-09T20:16:55.891 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../069-ceph-mgr-dashboard_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:55.892 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:55.948 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-09T20:16:56.042 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package rbd-fuse. 2026-03-09T20:16:56.049 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:56.050 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:56.069 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package smartmontools. 2026-03-09T20:16:56.075 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-09T20:16:56.083 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T20:16:56.141 INFO:teuthology.orchestra.run.vm04.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T20:16:56.217 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-09T20:16:56.257 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-sklearn-lib:amd64. 2026-03-09T20:16:56.263 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../070-python3-sklearn-lib_0.23.2-5ubuntu6_amd64.deb ... 2026-03-09T20:16:56.264 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T20:16:56.325 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-joblib. 2026-03-09T20:16:56.332 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../071-python3-joblib_0.17.0-4ubuntu1_all.deb ... 2026-03-09T20:16:56.333 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T20:16:56.370 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-threadpoolctl. 2026-03-09T20:16:56.376 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../072-python3-threadpoolctl_3.1.0-1_all.deb ... 2026-03-09T20:16:56.377 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-threadpoolctl (3.1.0-1) ... 2026-03-09T20:16:56.396 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-sklearn. 
2026-03-09T20:16:56.401 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../073-python3-sklearn_0.23.2-5ubuntu6_all.deb ... 2026-03-09T20:16:56.402 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T20:16:56.463 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-09T20:16:56.463 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-09T20:16:56.527 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr-diskprediction-local. 2026-03-09T20:16:56.534 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../074-ceph-mgr-diskprediction-local_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:56.535 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:56.565 INFO:teuthology.orchestra.run.vm03.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-09T20:16:56.572 INFO:teuthology.orchestra.run.vm03.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-09T20:16:56.574 INFO:teuthology.orchestra.run.vm03.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:56.615 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user cephadm....done 2026-03-09T20:16:56.623 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T20:16:56.767 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-09T20:16:56.796 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-09T20:16:56.802 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-cachetools. 2026-03-09T20:16:56.808 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../075-python3-cachetools_5.0.0-1_all.deb ... 2026-03-09T20:16:56.808 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-cachetools (5.0.0-1) ... 2026-03-09T20:16:56.823 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-rsa. 2026-03-09T20:16:56.829 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../076-python3-rsa_4.8-1_all.deb ... 2026-03-09T20:16:56.829 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-rsa (4.8-1) ... 2026-03-09T20:16:56.849 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-google-auth. 2026-03-09T20:16:56.852 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T20:16:56.855 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-09T20:16:56.855 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../077-python3-google-auth_1.5.1-3_all.deb ... 2026-03-09T20:16:56.856 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-google-auth (1.5.1-3) ... 2026-03-09T20:16:56.862 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T20:16:56.864 INFO:teuthology.orchestra.run.vm04.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 
2026-03-09T20:16:56.876 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-requests-oauthlib. 2026-03-09T20:16:56.882 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../078-python3-requests-oauthlib_1.3.0+ds-0.1_all.deb ... 2026-03-09T20:16:56.883 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T20:16:56.901 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-websocket. 2026-03-09T20:16:56.907 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../079-python3-websocket_1.2.3-1_all.deb ... 2026-03-09T20:16:56.908 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-websocket (1.2.3-1) ... 2026-03-09T20:16:56.923 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-09T20:16:56.928 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-kubernetes. 2026-03-09T20:16:56.934 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../080-python3-kubernetes_12.0.1-1ubuntu1_all.deb ... 2026-03-09T20:16:56.935 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-09T20:16:56.947 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T20:16:56.994 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T20:16:56.997 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-09T20:16:57.091 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T20:16:57.118 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-mgr-k8sevents. 2026-03-09T20:16:57.124 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../081-ceph-mgr-k8sevents_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:57.125 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:57.145 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libonig5:amd64. 2026-03-09T20:16:57.149 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../082-libonig5_6.9.7.1-2build1_amd64.deb ... 2026-03-09T20:16:57.149 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T20:16:57.169 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libjq1:amd64. 2026-03-09T20:16:57.174 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../083-libjq1_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T20:16:57.175 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T20:16:57.179 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-09T20:16:57.192 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package jq. 2026-03-09T20:16:57.197 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../084-jq_1.6-2.1ubuntu3.1_amd64.deb ... 2026-03-09T20:16:57.198 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking jq (1.6-2.1ubuntu3.1) ... 
2026-03-09T20:16:57.214 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package socat. 2026-03-09T20:16:57.219 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../085-socat_1.7.4.1-3ubuntu4_amd64.deb ... 2026-03-09T20:16:57.220 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking socat (1.7.4.1-3ubuntu4) ... 2026-03-09T20:16:57.221 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-09T20:16:57.241 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package xmlstarlet. 2026-03-09T20:16:57.247 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../086-xmlstarlet_1.6.1-2.1_amd64.deb ... 2026-03-09T20:16:57.247 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking xmlstarlet (1.6.1-2.1) ... 2026-03-09T20:16:57.291 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-test. 2026-03-09T20:16:57.293 INFO:teuthology.orchestra.run.vm03.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-09T20:16:57.296 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../087-ceph-test_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:57.297 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:57.302 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-09T20:16:57.379 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-09T20:16:57.454 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:57.537 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T20:16:57.539 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-09T20:16:57.542 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T20:16:57.545 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T20:16:57.547 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T20:16:57.549 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-09T20:16:57.554 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-09T20:16:57.556 INFO:teuthology.orchestra.run.vm03.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-09T20:16:57.558 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T20:16:57.561 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-09T20:16:57.580 INFO:teuthology.orchestra.run.vm04.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-09T20:16:57.586 INFO:teuthology.orchestra.run.vm04.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-09T20:16:57.674 INFO:teuthology.orchestra.run.vm04.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:57.727 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-natsort (8.0.2-1) ... 
2026-03-09T20:16:57.765 INFO:teuthology.orchestra.run.vm04.stdout:Adding system user cephadm....done 2026-03-09T20:16:57.775 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T20:16:57.801 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T20:16:57.954 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-09T20:16:57.955 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-09T20:16:57.972 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package ceph-volume. 2026-03-09T20:16:57.977 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../088-ceph-volume_19.2.3-678-ge911bdeb-1jammy_all.deb ... 2026-03-09T20:16:57.978 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:58.006 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package libcephfs-dev. 2026-03-09T20:16:58.011 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../089-libcephfs-dev_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:58.012 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:58.027 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package lua-socket:amd64. 2026-03-09T20:16:58.027 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T20:16:58.029 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-09T20:16:58.031 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../090-lua-socket_3.0~rc1+git+ac3201d-6_amd64.deb ... 2026-03-09T20:16:58.032 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T20:16:58.041 INFO:teuthology.orchestra.run.vm03.stdout:Setting up zip (3.0-12build2) ... 2026-03-09T20:16:58.044 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T20:16:58.055 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package lua-sec:amd64. 2026-03-09T20:16:58.059 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../091-lua-sec_1.0.2-1_amd64.deb ... 2026-03-09T20:16:58.060 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking lua-sec:amd64 (1.0.2-1) ... 2026-03-09T20:16:58.079 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package nvme-cli. 2026-03-09T20:16:58.083 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../092-nvme-cli_1.16-3ubuntu0.3_amd64.deb ... 2026-03-09T20:16:58.083 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T20:16:58.095 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-09T20:16:58.164 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T20:16:58.166 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-09T20:16:58.166 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package pkg-config. 2026-03-09T20:16:58.172 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../093-pkg-config_0.29.2-1ubuntu3_amd64.deb ... 
2026-03-09T20:16:58.173 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T20:16:58.187 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python-asyncssh-doc. 2026-03-09T20:16:58.193 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../094-python-asyncssh-doc_2.5.0-1ubuntu0.1_all.deb ... 2026-03-09T20:16:58.193 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T20:16:58.238 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-iniconfig. 2026-03-09T20:16:58.244 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../095-python3-iniconfig_1.1.1-2_all.deb ... 2026-03-09T20:16:58.245 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-iniconfig (1.1.1-2) ... 2026-03-09T20:16:58.254 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T20:16:58.263 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pastescript. 2026-03-09T20:16:58.269 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../096-python3-pastescript_2.0.2-4_all.deb ... 2026-03-09T20:16:58.270 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pastescript (2.0.2-4) ... 2026-03-09T20:16:58.292 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pluggy. 2026-03-09T20:16:58.298 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../097-python3-pluggy_0.13.0-7.1_all.deb ... 2026-03-09T20:16:58.299 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pluggy (0.13.0-7.1) ... 2026-03-09T20:16:58.318 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-psutil. 2026-03-09T20:16:58.323 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T20:16:58.324 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../098-python3-psutil_5.9.0-1build1_amd64.deb ... 2026-03-09T20:16:58.325 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-psutil (5.9.0-1build1) ... 2026-03-09T20:16:58.348 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-py. 2026-03-09T20:16:58.354 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../099-python3-py_1.10.0-1_all.deb ... 2026-03-09T20:16:58.355 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-py (1.10.0-1) ... 2026-03-09T20:16:58.371 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cachetools (5.0.0-1) ... 2026-03-09T20:16:58.380 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pygments. 2026-03-09T20:16:58.386 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../100-python3-pygments_2.11.2+dfsg-2ubuntu0.1_all.deb ... 2026-03-09T20:16:58.387 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T20:16:58.394 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T20:16:58.396 INFO:teuthology.orchestra.run.vm03.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-09T20:16:58.399 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T20:16:58.440 INFO:teuthology.orchestra.run.vm04.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 
2026-03-09T20:16:58.448 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-09T20:16:58.453 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pyinotify. 2026-03-09T20:16:58.455 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../101-python3-pyinotify_0.9.6-1.3_all.deb ... 2026-03-09T20:16:58.456 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pyinotify (0.9.6-1.3) ... 2026-03-09T20:16:58.472 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-toml. 2026-03-09T20:16:58.479 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../102-python3-toml_0.10.2-1_all.deb ... 2026-03-09T20:16:58.480 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-toml (0.10.2-1) ... 2026-03-09T20:16:58.491 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T20:16:58.499 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-pytest. 2026-03-09T20:16:58.504 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../103-python3-pytest_6.2.5-1ubuntu2_all.deb ... 2026-03-09T20:16:58.505 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T20:16:58.518 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-09T20:16:58.535 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-simplejson. 2026-03-09T20:16:58.539 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../104-python3-simplejson_3.17.6-1build1_amd64.deb ... 2026-03-09T20:16:58.540 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-simplejson (3.17.6-1build1) ... 2026-03-09T20:16:58.560 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package qttranslations5-l10n. 2026-03-09T20:16:58.565 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../105-qttranslations5-l10n_5.15.3-1_all.deb ... 2026-03-09T20:16:58.565 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking qttranslations5-l10n (5.15.3-1) ... 2026-03-09T20:16:58.586 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:58.648 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T20:16:58.659 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T20:16:58.661 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-09T20:16:58.663 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T20:16:58.665 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T20:16:58.667 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T20:16:58.669 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-09T20:16:58.672 INFO:teuthology.orchestra.run.vm04.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-09T20:16:58.674 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package radosgw. 
2026-03-09T20:16:58.674 INFO:teuthology.orchestra.run.vm04.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-09T20:16:58.676 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T20:16:58.678 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-09T20:16:58.680 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../106-radosgw_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:58.680 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:58.779 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T20:16:58.841 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-09T20:16:58.866 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T20:16:58.881 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package rbd-fuse. 2026-03-09T20:16:58.882 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../107-rbd-fuse_19.2.3-678-ge911bdeb-1jammy_amd64.deb ... 2026-03-09T20:16:58.883 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:58.901 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package smartmontools. 2026-03-09T20:16:58.906 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../108-smartmontools_7.2-1ubuntu0.1_amd64.deb ... 2026-03-09T20:16:58.913 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T20:16:58.933 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T20:16:58.953 INFO:teuthology.orchestra.run.vm08.stdout:Setting up smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T20:16:58.978 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-09T20:16:59.000 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-09T20:16:59.040 INFO:teuthology.orchestra.run.vm03.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-09T20:16:59.042 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:16:59.077 INFO:teuthology.orchestra.run.vm04.stdout:Setting up zip (3.0-12build2) ... 2026-03-09T20:16:59.079 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T20:16:59.136 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T20:16:59.200 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/smartd.service → /lib/systemd/system/smartmontools.service. 2026-03-09T20:16:59.200 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/smartmontools.service → /lib/systemd/system/smartmontools.service. 2026-03-09T20:16:59.349 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T20:16:59.416 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T20:16:59.418 INFO:teuthology.orchestra.run.vm04.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 
2026-03-09T20:16:59.420 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T20:16:59.506 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T20:16:59.620 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-iniconfig (1.1.1-2) ... 2026-03-09T20:16:59.636 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T20:16:59.680 INFO:teuthology.orchestra.run.vm03.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T20:16:59.682 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T20:16:59.684 INFO:teuthology.orchestra.run.vm08.stdout:Setting up nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T20:16:59.701 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:59.705 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-09T20:16:59.743 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmefc-boot-connections.service → /lib/systemd/system/nvmefc-boot-connections.service. 2026-03-09T20:16:59.761 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T20:16:59.772 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T20:16:59.774 INFO:teuthology.orchestra.run.vm03.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-09T20:16:59.776 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-09T20:16:59.845 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-09T20:16:59.851 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T20:16:59.908 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:16:59.909 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-09T20:16:59.964 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-09T20:16:59.969 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/default.target.wants/nvmf-autoconnect.service → /lib/systemd/system/nvmf-autoconnect.service. 2026-03-09T20:16:59.977 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-09T20:17:00.039 INFO:teuthology.orchestra.run.vm04.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-09T20:17:00.041 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:00.059 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-09T20:17:00.129 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-09T20:17:00.131 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T20:17:00.198 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-09T20:17:00.262 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-09T20:17:00.336 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 
2026-03-09T20:17:00.338 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-09T20:17:00.352 INFO:teuthology.orchestra.run.vm08.stdout:nvmf-connect.target is a disabled or a static unit, not starting it. 2026-03-09T20:17:00.358 INFO:teuthology.orchestra.run.vm08.stdout:Could not execute systemctl: at /usr/bin/deb-systemd-invoke line 142. 2026-03-09T20:17:00.360 INFO:teuthology.orchestra.run.vm08.stdout:Setting up cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:00.401 INFO:teuthology.orchestra.run.vm08.stdout:Adding system user cephadm....done 2026-03-09T20:17:00.409 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T20:17:00.417 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T20:17:00.420 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T20:17:00.481 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-jaraco.classes (3.2.1-3) ... 2026-03-09T20:17:00.489 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T20:17:00.545 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T20:17:00.547 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-jaraco.functools (3.4.0-2) ... 2026-03-09T20:17:00.574 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T20:17:00.611 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-repoze.lru (0.7-2) ... 2026-03-09T20:17:00.667 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-09T20:17:00.682 INFO:teuthology.orchestra.run.vm08.stdout:Setting up liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T20:17:00.683 INFO:teuthology.orchestra.run.vm04.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T20:17:00.684 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-py (1.10.0-1) ... 2026-03-09T20:17:00.704 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:17:00.709 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-09T20:17:00.736 INFO:teuthology.orchestra.run.vm03.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T20:17:00.738 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-09T20:17:00.741 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T20:17:00.743 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T20:17:00.777 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T20:17:00.781 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T20:17:00.783 INFO:teuthology.orchestra.run.vm04.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-09T20:17:00.785 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-09T20:17:00.851 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-09T20:17:00.883 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-09T20:17:00.902 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-cachetools (5.0.0-1) ... 
2026-03-09T20:17:00.918 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:17:00.920 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rsa (4.8-1) ... 2026-03-09T20:17:00.959 INFO:teuthology.orchestra.run.vm03.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-09T20:17:00.961 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-09T20:17:00.974 INFO:teuthology.orchestra.run.vm08.stdout:Setting up unzip (6.0-26ubuntu3.2) ... 2026-03-09T20:17:00.983 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pyinotify (0.9.6-1.3) ... 2026-03-09T20:17:00.993 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-09T20:17:01.029 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:17:01.031 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-09T20:17:01.053 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-threadpoolctl (3.1.0-1) ... 2026-03-09T20:17:01.060 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-09T20:17:01.108 INFO:teuthology.orchestra.run.vm03.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-09T20:17:01.110 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-09T20:17:01.119 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:01.130 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-09T20:17:01.183 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-09T20:17:01.188 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T20:17:01.191 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libnbd0 (1.10.5-1) ... 2026-03-09T20:17:01.193 INFO:teuthology.orchestra.run.vm08.stdout:Setting up lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T20:17:01.195 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-09T20:17:01.195 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T20:17:01.197 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T20:17:01.199 INFO:teuthology.orchestra.run.vm08.stdout:Setting up lua5.1 (5.1.5-8.1build4) ... 2026-03-09T20:17:01.203 INFO:teuthology.orchestra.run.vm08.stdout:update-alternatives: using /usr/bin/lua5.1 to provide /usr/bin/lua (lua-interpreter) in auto mode 2026-03-09T20:17:01.205 INFO:teuthology.orchestra.run.vm08.stdout:update-alternatives: using /usr/bin/luac5.1 to provide /usr/bin/luac (lua-compiler) in auto mode 2026-03-09T20:17:01.207 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T20:17:01.209 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-psutil (5.9.0-1build1) ... 2026-03-09T20:17:01.257 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-09T20:17:01.312 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-09T20:17:01.323 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 
2026-03-09T20:17:01.325 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-09T20:17:01.330 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-natsort (8.0.2-1) ... 2026-03-09T20:17:01.397 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T20:17:01.398 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T20:17:01.404 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T20:17:01.406 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T20:17:01.467 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-simplejson (3.17.6-1build1) ... 2026-03-09T20:17:01.477 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T20:17:01.509 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T20:17:01.511 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:01.513 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:01.515 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T20:17:01.546 INFO:teuthology.orchestra.run.vm08.stdout:Setting up zip (3.0-12build2) ... 2026-03-09T20:17:01.548 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pygments (2.11.2+dfsg-2ubuntu0.1) ... 2026-03-09T20:17:01.558 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T20:17:01.646 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-09T20:17:01.713 INFO:teuthology.orchestra.run.vm04.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T20:17:01.715 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-09T20:17:01.717 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T20:17:01.719 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T20:17:01.814 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T20:17:01.847 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-09T20:17:01.882 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T20:17:01.884 INFO:teuthology.orchestra.run.vm08.stdout:Setting up qttranslations5-l10n (5.15.3-1) ... 2026-03-09T20:17:01.886 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T20:17:01.920 INFO:teuthology.orchestra.run.vm04.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-09T20:17:01.922 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-09T20:17:01.977 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T20:17:01.984 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:17:01.985 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-09T20:17:02.057 INFO:teuthology.orchestra.run.vm04.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 
2026-03-09T20:17:02.059 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-09T20:17:02.064 INFO:teuthology.orchestra.run.vm03.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-09T20:17:02.069 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.072 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.074 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.076 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.078 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.107 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T20:17:02.129 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-09T20:17:02.141 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T20:17:02.141 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T20:17:02.228 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T20:17:02.256 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-09T20:17:02.314 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T20:17:02.336 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T20:17:02.426 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-jaraco.text (3.6.0-2) ... 2026-03-09T20:17:02.441 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T20:17:02.444 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.445 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.448 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T20:17:02.492 INFO:teuthology.orchestra.run.vm03.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.495 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.498 INFO:teuthology.orchestra.run.vm03.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.500 INFO:teuthology.orchestra.run.vm08.stdout:Setting up socat (1.7.4.1-3ubuntu4) ... 2026-03-09T20:17:02.500 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.502 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.502 INFO:teuthology.orchestra.run.vm03.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.505 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T20:17:02.507 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.509 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:02.547 INFO:teuthology.orchestra.run.vm03.stdout:Adding group ceph....done 2026-03-09T20:17:02.579 INFO:teuthology.orchestra.run.vm03.stdout:Adding system user ceph....done 2026-03-09T20:17:02.585 INFO:teuthology.orchestra.run.vm03.stdout:Setting system user ceph properties....done 2026-03-09T20:17:02.589 INFO:teuthology.orchestra.run.vm03.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-09T20:17:02.591 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T20:17:02.657 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-09T20:17:02.915 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-09T20:17:03.029 INFO:teuthology.orchestra.run.vm04.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-09T20:17:03.035 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.037 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.038 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.040 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.042 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.107 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T20:17:03.107 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T20:17:03.147 INFO:teuthology.orchestra.run.vm08.stdout:Setting up pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T20:17:03.168 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:17:03.174 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-toml (0.10.2-1) ... 2026-03-09T20:17:03.254 INFO:teuthology.orchestra.run.vm08.stdout:Setting up librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T20:17:03.288 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.291 INFO:teuthology.orchestra.run.vm08.stdout:Setting up xmlstarlet (1.6.1-2.1) ... 2026-03-09T20:17:03.291 INFO:teuthology.orchestra.run.vm03.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.294 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pluggy (0.13.0-7.1) ... 2026-03-09T20:17:03.359 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-zc.lockfile (2.0-1) ... 2026-03-09T20:17:03.425 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:17:03.427 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-rsa (4.8-1) ... 
2026-03-09T20:17:03.438 INFO:teuthology.orchestra.run.vm04.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.440 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.441 INFO:teuthology.orchestra.run.vm04.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.443 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.445 INFO:teuthology.orchestra.run.vm04.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.447 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.449 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.451 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.485 INFO:teuthology.orchestra.run.vm04.stdout:Adding group ceph....done 2026-03-09T20:17:03.500 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-singledispatch (3.4.0.3-3) ... 2026-03-09T20:17:03.511 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T20:17:03.511 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T20:17:03.519 INFO:teuthology.orchestra.run.vm04.stdout:Adding system user ceph....done 2026-03-09T20:17:03.526 INFO:teuthology.orchestra.run.vm04.stdout:Setting system user ceph properties....done 2026-03-09T20:17:03.531 INFO:teuthology.orchestra.run.vm04.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-09T20:17:03.564 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-logutils (0.3.3-8) ... 2026-03-09T20:17:03.593 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-09T20:17:03.633 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-tempora (4.1.2-1) ... 2026-03-09T20:17:03.700 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-simplegeneric (0.8.1-3) ... 2026-03-09T20:17:03.762 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-prettytable (2.5.0-2) ... 2026-03-09T20:17:03.800 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-09T20:17:03.832 INFO:teuthology.orchestra.run.vm08.stdout:Setting up liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T20:17:03.834 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-websocket (1.2.3-1) ... 2026-03-09T20:17:03.904 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:03.913 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T20:17:03.915 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T20:17:03.985 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 
2026-03-09T20:17:03.985 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T20:17:04.068 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T20:17:04.149 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:04.152 INFO:teuthology.orchestra.run.vm04.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:04.161 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-jaraco.collections (3.4.0-2) ... 2026-03-09T20:17:04.235 INFO:teuthology.orchestra.run.vm08.stdout:Setting up liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T20:17:04.238 INFO:teuthology.orchestra.run.vm08.stdout:Setting up lua-sec:amd64 (1.0.2-1) ... 2026-03-09T20:17:04.240 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T20:17:04.243 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pytest (6.2.5-1ubuntu2) ... 2026-03-09T20:17:04.375 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:04.379 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pastedeploy (2.1.1-1) ... 2026-03-09T20:17:04.412 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T20:17:04.412 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T20:17:04.439 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T20:17:04.439 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T20:17:04.450 INFO:teuthology.orchestra.run.vm08.stdout:Setting up lua-any (27ubuntu1) ... 2026-03-09T20:17:04.452 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-portend (3.0.0-1) ... 2026-03-09T20:17:04.518 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:17:04.521 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-google-auth (1.5.1-3) ... 2026-03-09T20:17:04.596 INFO:teuthology.orchestra.run.vm08.stdout:Setting up jq (1.6-2.1ubuntu3.1) ... 2026-03-09T20:17:04.598 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-webtest (2.0.35-1) ... 2026-03-09T20:17:04.671 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-cherrypy3 (18.6.1-4) ... 2026-03-09T20:17:04.780 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:04.801 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pastescript (2.0.2-4) ... 2026-03-09T20:17:04.810 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:04.866 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-09T20:17:04.871 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 
2026-03-09T20:17:04.871 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T20:17:04.887 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T20:17:04.995 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T20:17:04.997 INFO:teuthology.orchestra.run.vm08.stdout:Setting up librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.000 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.002 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T20:17:05.210 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.261 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.289 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T20:17:05.289 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T20:17:05.328 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T20:17:05.328 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T20:17:05.547 INFO:teuthology.orchestra.run.vm08.stdout:Setting up luarocks (3.8.0+dfsg1-1) ... 2026-03-09T20:17:05.558 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.560 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.562 INFO:teuthology.orchestra.run.vm08.stdout:Setting up librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.565 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.568 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.614 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.616 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.623 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/remote-fs.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T20:17:05.623 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-fuse.target → /lib/systemd/system/ceph-fuse.target. 2026-03-09T20:17:05.628 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.672 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:05.683 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 
2026-03-09T20:17:05.683 INFO:teuthology.orchestra.run.vm03.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T20:17:05.738 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T20:17:05.738 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T20:17:06.003 INFO:teuthology.orchestra.run.vm08.stdout:Setting up libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.005 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.008 INFO:teuthology.orchestra.run.vm08.stdout:Setting up librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.010 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.013 INFO:teuthology.orchestra.run.vm08.stdout:Setting up rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.015 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.017 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.020 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.024 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.037 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.039 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.050 INFO:teuthology.orchestra.run.vm03.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.053 INFO:teuthology.orchestra.run.vm08.stdout:Adding group ceph....done 2026-03-09T20:17:06.087 INFO:teuthology.orchestra.run.vm08.stdout:Adding system user ceph....done 2026-03-09T20:17:06.095 INFO:teuthology.orchestra.run.vm08.stdout:Setting system user ceph properties....done 2026-03-09T20:17:06.098 INFO:teuthology.orchestra.run.vm08.stdout:chown: cannot access '/var/log/ceph/*.log*': No such file or directory 2026-03-09T20:17:06.118 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.160 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /lib/systemd/system/ceph.target. 2026-03-09T20:17:06.162 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T20:17:06.169 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:17:06.182 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:17:06.190 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T20:17:06.190 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 
2026-03-09T20:17:06.258 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-09T20:17:06.408 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/rbdmap.service → /lib/systemd/system/rbdmap.service. 2026-03-09T20:17:06.562 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.565 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.573 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:06.573 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date. 2026-03-09T20:17:06.573 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:06.573 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted: 2026-03-09T20:17:06.578 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.580 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service 2026-03-09T20:17:06.582 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:06.583 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred: 2026-03-09T20:17:06.583 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T20:17:06.583 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service 2026-03-09T20:17:06.583 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:06.583 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted. 2026-03-09T20:17:06.583 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:06.583 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries. 2026-03-09T20:17:06.583 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:06.583 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T20:17:06.634 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T20:17:06.634 INFO:teuthology.orchestra.run.vm04.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T20:17:06.808 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:06.810 INFO:teuthology.orchestra.run.vm08.stdout:Setting up radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:07.024 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:07.037 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:07.039 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:07.052 INFO:teuthology.orchestra.run.vm04.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:07.069 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 2026-03-09T20:17:07.069 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-radosgw.target → /lib/systemd/system/ceph-radosgw.target. 
2026-03-09T20:17:07.167 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T20:17:07.174 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:17:07.189 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:17:07.268 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-09T20:17:07.446 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:17:07.449 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-09T20:17:07.454 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:07.527 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:17:07.540 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-crash.service → /lib/systemd/system/ceph-crash.service. 2026-03-09T20:17:07.592 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:07.592 INFO:teuthology.orchestra.run.vm04.stdout:Running kernel seems to be up-to-date. 2026-03-09T20:17:07.592 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:07.593 INFO:teuthology.orchestra.run.vm04.stdout:Services to be restarted: 2026-03-09T20:17:07.599 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart packagekit.service 2026-03-09T20:17:07.601 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:07.601 INFO:teuthology.orchestra.run.vm04.stdout:Service restarts being deferred: 2026-03-09T20:17:07.601 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T20:17:07.601 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart unattended-upgrades.service 2026-03-09T20:17:07.601 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:07.602 INFO:teuthology.orchestra.run.vm04.stdout:No containers need to be restarted. 2026-03-09T20:17:07.602 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:07.602 INFO:teuthology.orchestra.run.vm04.stdout:No user sessions are running outdated binaries. 2026-03-09T20:17:07.602 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:07.602 INFO:teuthology.orchestra.run.vm04.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T20:17:07.734 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:17:07.734 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:17:07.889 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:17:07.890 INFO:teuthology.orchestra.run.vm03.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:17:07.890 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T20:17:07.890 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 
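The 'W: --force-yes is deprecated' warnings above come from the extra_system_packages step; the full command teuthology runs is visible in the DEBUG line for vm03. A minimal hand-run sketch of the same noninteractive install with the deprecated flag dropped (assuming none of the old --force-yes behaviours, such as downgrades or changing held packages, are actually needed; if they were, the matching --allow-* option would have to be added):

    # Same install of the extra test dependencies, with --force-yes removed.
    # The dpkg conffile options are kept exactly as in the logged command.
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
      -o Dpkg::Options::="--force-confdef" \
      -o Dpkg::Options::="--force-confold" \
      install python3-xmltodict python3-jmespath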
2026-03-09T20:17:07.905 INFO:teuthology.orchestra.run.vm03.stdout:The following NEW packages will be installed: 2026-03-09T20:17:07.905 INFO:teuthology.orchestra.run.vm03.stdout: python3-jmespath python3-xmltodict 2026-03-09T20:17:07.923 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:07.986 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T20:17:07.986 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mds.target → /lib/systemd/system/ceph-mds.target. 2026-03-09T20:17:08.340 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:08.397 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:17:08.397 INFO:teuthology.orchestra.run.vm03.stdout:Need to get 34.3 kB of archives. 2026-03-09T20:17:08.397 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-09T20:17:08.397 INFO:teuthology.orchestra.run.vm03.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-09T20:17:08.397 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T20:17:08.397 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mgr.target → /lib/systemd/system/ceph-mgr.target. 2026-03-09T20:17:08.438 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:17:08.441 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-09T20:17:08.480 INFO:teuthology.orchestra.run.vm03.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-09T20:17:08.520 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:17:08.664 INFO:teuthology.orchestra.run.vm03.stdout:Fetched 34.3 kB in 1s (59.6 kB/s) 2026-03-09T20:17:08.676 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-jmespath. 2026-03-09T20:17:08.687 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:17:08.688 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:17:08.702 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 
2026-03-09T20:17:08.705 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-09T20:17:08.706 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-09T20:17:08.721 INFO:teuthology.orchestra.run.vm03.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-09T20:17:08.726 INFO:teuthology.orchestra.run.vm03.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-09T20:17:08.727 INFO:teuthology.orchestra.run.vm03.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-09T20:17:08.751 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-09T20:17:08.815 INFO:teuthology.orchestra.run.vm03.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-09T20:17:08.815 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:08.820 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:17:08.821 INFO:teuthology.orchestra.run.vm04.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:17:08.821 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T20:17:08.821 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:17:08.834 INFO:teuthology.orchestra.run.vm04.stdout:The following NEW packages will be installed: 2026-03-09T20:17:08.834 INFO:teuthology.orchestra.run.vm04.stdout: python3-jmespath python3-xmltodict 2026-03-09T20:17:08.891 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T20:17:08.891 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-osd.target → /lib/systemd/system/ceph-osd.target. 2026-03-09T20:17:09.124 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:09.124 INFO:teuthology.orchestra.run.vm03.stdout:Running kernel seems to be up-to-date. 2026-03-09T20:17:09.124 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:09.124 INFO:teuthology.orchestra.run.vm03.stdout:Services to be restarted: 2026-03-09T20:17:09.129 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart packagekit.service 2026-03-09T20:17:09.132 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:09.132 INFO:teuthology.orchestra.run.vm03.stdout:Service restarts being deferred: 2026-03-09T20:17:09.132 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T20:17:09.132 INFO:teuthology.orchestra.run.vm03.stdout: systemctl restart unattended-upgrades.service 2026-03-09T20:17:09.132 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:09.132 INFO:teuthology.orchestra.run.vm03.stdout:No containers need to be restarted. 2026-03-09T20:17:09.132 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:09.132 INFO:teuthology.orchestra.run.vm03.stdout:No user sessions are running outdated binaries. 2026-03-09T20:17:09.132 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:17:09.132 INFO:teuthology.orchestra.run.vm03.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T20:17:09.246 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T20:17:09.248 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:09.261 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:09.299 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:17:09.299 INFO:teuthology.orchestra.run.vm04.stdout:Need to get 34.3 kB of archives. 2026-03-09T20:17:09.299 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-09T20:17:09.299 INFO:teuthology.orchestra.run.vm04.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-09T20:17:09.321 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/multi-user.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T20:17:09.321 INFO:teuthology.orchestra.run.vm08.stdout:Created symlink /etc/systemd/system/ceph.target.wants/ceph-mon.target → /lib/systemd/system/ceph-mon.target. 2026-03-09T20:17:09.522 INFO:teuthology.orchestra.run.vm04.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-09T20:17:09.654 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:09.666 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:09.668 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:09.681 INFO:teuthology.orchestra.run.vm08.stdout:Setting up ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:17:09.713 INFO:teuthology.orchestra.run.vm04.stdout:Fetched 34.3 kB in 1s (49.4 kB/s) 2026-03-09T20:17:09.919 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T20:17:09.926 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-jmespath. 2026-03-09T20:17:09.927 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:17:09.930 DEBUG:teuthology.parallel:result is None 2026-03-09T20:17:09.932 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:17:09.946 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:17:09.955 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-09T20:17:09.957 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-09T20:17:09.958 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-jmespath (0.10.0-1) ... 
2026-03-09T20:17:09.977 INFO:teuthology.orchestra.run.vm04.stdout:Selecting previously unselected package python3-xmltodict. 2026-03-09T20:17:09.983 INFO:teuthology.orchestra.run.vm04.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-09T20:17:09.983 INFO:teuthology.orchestra.run.vm04.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-09T20:17:10.014 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-09T20:17:10.028 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for install-info (6.8-4build1) ... 2026-03-09T20:17:10.084 INFO:teuthology.orchestra.run.vm04.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-09T20:17:10.352 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:10.352 INFO:teuthology.orchestra.run.vm08.stdout:Running kernel seems to be up-to-date. 2026-03-09T20:17:10.352 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:10.352 INFO:teuthology.orchestra.run.vm08.stdout:Services to be restarted: 2026-03-09T20:17:10.357 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart packagekit.service 2026-03-09T20:17:10.359 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:10.360 INFO:teuthology.orchestra.run.vm08.stdout:Service restarts being deferred: 2026-03-09T20:17:10.360 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T20:17:10.360 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart unattended-upgrades.service 2026-03-09T20:17:10.360 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:10.360 INFO:teuthology.orchestra.run.vm08.stdout:No containers need to be restarted. 2026-03-09T20:17:10.360 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:10.360 INFO:teuthology.orchestra.run.vm08.stdout:No user sessions are running outdated binaries. 2026-03-09T20:17:10.360 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:10.360 INFO:teuthology.orchestra.run.vm08.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T20:17:10.409 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:10.409 INFO:teuthology.orchestra.run.vm04.stdout:Running kernel seems to be up-to-date. 2026-03-09T20:17:10.409 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:10.409 INFO:teuthology.orchestra.run.vm04.stdout:Services to be restarted: 2026-03-09T20:17:10.414 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart packagekit.service 2026-03-09T20:17:10.416 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:10.416 INFO:teuthology.orchestra.run.vm04.stdout:Service restarts being deferred: 2026-03-09T20:17:10.416 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T20:17:10.416 INFO:teuthology.orchestra.run.vm04.stdout: systemctl restart unattended-upgrades.service 2026-03-09T20:17:10.416 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:10.416 INFO:teuthology.orchestra.run.vm04.stdout:No containers need to be restarted. 2026-03-09T20:17:10.416 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:10.416 INFO:teuthology.orchestra.run.vm04.stdout:No user sessions are running outdated binaries. 2026-03-09T20:17:10.416 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:17:10.416 INFO:teuthology.orchestra.run.vm04.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 
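needrestart prints the same summary on every node: packagekit.service is listed for restart, while networkd-dispatcher.service and unattended-upgrades.service are deferred. If the deferred restarts were ever needed immediately (this run does not require them), the commands printed in the log could simply be executed as-is:

    # Apply the restarts that needrestart deferred, exactly as suggested above.
    sudo systemctl restart networkd-dispatcher.service
    sudo systemctl restart unattended-upgrades.service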
2026-03-09T20:17:11.210 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:17:11.213 DEBUG:teuthology.orchestra.run.vm08:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install python3-xmltodict python3-jmespath 2026-03-09T20:17:11.288 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:17:11.296 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:17:11.300 DEBUG:teuthology.parallel:result is None 2026-03-09T20:17:11.468 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:17:11.469 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:17:11.577 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:17:11.577 INFO:teuthology.orchestra.run.vm08.stdout: kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:17:11.577 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T20:17:11.577 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:17:11.589 INFO:teuthology.orchestra.run.vm08.stdout:The following NEW packages will be installed: 2026-03-09T20:17:11.589 INFO:teuthology.orchestra.run.vm08.stdout: python3-jmespath python3-xmltodict 2026-03-09T20:17:11.672 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 2 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:17:11.672 INFO:teuthology.orchestra.run.vm08.stdout:Need to get 34.3 kB of archives. 2026-03-09T20:17:11.672 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 146 kB of additional disk space will be used. 2026-03-09T20:17:11.672 INFO:teuthology.orchestra.run.vm08.stdout:Get:1 https://archive.ubuntu.com/ubuntu jammy/main amd64 python3-jmespath all 0.10.0-1 [21.7 kB] 2026-03-09T20:17:11.688 INFO:teuthology.orchestra.run.vm08.stdout:Get:2 https://archive.ubuntu.com/ubuntu jammy/universe amd64 python3-xmltodict all 0.12.0-2 [12.6 kB] 2026-03-09T20:17:11.861 INFO:teuthology.orchestra.run.vm08.stdout:Fetched 34.3 kB in 0s (354 kB/s) 2026-03-09T20:17:11.962 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-jmespath. 2026-03-09T20:17:11.989 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118577 files and directories currently installed.) 2026-03-09T20:17:11.991 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../python3-jmespath_0.10.0-1_all.deb ... 2026-03-09T20:17:11.992 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-jmespath (0.10.0-1) ... 2026-03-09T20:17:12.007 INFO:teuthology.orchestra.run.vm08.stdout:Selecting previously unselected package python3-xmltodict. 
2026-03-09T20:17:12.012 INFO:teuthology.orchestra.run.vm08.stdout:Preparing to unpack .../python3-xmltodict_0.12.0-2_all.deb ... 2026-03-09T20:17:12.013 INFO:teuthology.orchestra.run.vm08.stdout:Unpacking python3-xmltodict (0.12.0-2) ... 2026-03-09T20:17:12.038 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-xmltodict (0.12.0-2) ... 2026-03-09T20:17:12.100 INFO:teuthology.orchestra.run.vm08.stdout:Setting up python3-jmespath (0.10.0-1) ... 2026-03-09T20:17:12.428 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:12.428 INFO:teuthology.orchestra.run.vm08.stdout:Running kernel seems to be up-to-date. 2026-03-09T20:17:12.428 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:12.428 INFO:teuthology.orchestra.run.vm08.stdout:Services to be restarted: 2026-03-09T20:17:12.433 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart packagekit.service 2026-03-09T20:17:12.436 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:12.436 INFO:teuthology.orchestra.run.vm08.stdout:Service restarts being deferred: 2026-03-09T20:17:12.436 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart networkd-dispatcher.service 2026-03-09T20:17:12.436 INFO:teuthology.orchestra.run.vm08.stdout: systemctl restart unattended-upgrades.service 2026-03-09T20:17:12.436 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:12.436 INFO:teuthology.orchestra.run.vm08.stdout:No containers need to be restarted. 2026-03-09T20:17:12.436 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:12.436 INFO:teuthology.orchestra.run.vm08.stdout:No user sessions are running outdated binaries. 2026-03-09T20:17:12.436 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:17:12.436 INFO:teuthology.orchestra.run.vm08.stdout:No VM guests are running outdated hypervisor (qemu) binaries on this host. 2026-03-09T20:17:13.294 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:17:13.297 DEBUG:teuthology.parallel:result is None 2026-03-09T20:17:13.297 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:17:13.855 DEBUG:teuthology.orchestra.run.vm03:> dpkg-query -W -f '${Version}' ceph 2026-03-09T20:17:13.864 INFO:teuthology.orchestra.run.vm03.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:17:13.864 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:17:13.864 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-09T20:17:13.865 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:17:14.543 DEBUG:teuthology.orchestra.run.vm04:> dpkg-query -W -f '${Version}' ceph 2026-03-09T20:17:14.552 INFO:teuthology.orchestra.run.vm04.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:17:14.552 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:17:14.552 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 
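At the end of the install task each node is re-verified: teuthology queries shaman for builds matching the exact sha1 and then reads the installed package version with dpkg-query. The commands below are copied from the DEBUG lines above; the final grep is only an illustrative way to confirm that the short sha1 (e911bdeb) is embedded in the installed version string, not something teuthology itself runs:

    # Shaman build lookup for this exact sha1 (URL taken verbatim from the log).
    SHA1=e911bdebe5c8faa3800735d1568fcdca65db60df
    curl -s "https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=${SHA1}"

    # On each target node: the installed version should carry the short sha1.
    dpkg-query -W -f '${Version}' ceph            # 19.2.3-678-ge911bdeb-1jammy
    dpkg-query -W -f '${Version}' ceph | grep -q "g${SHA1:0:8}" && echo version matches build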
2026-03-09T20:17:14.553 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:17:15.192 DEBUG:teuthology.orchestra.run.vm08:> dpkg-query -W -f '${Version}' ceph 2026-03-09T20:17:15.200 INFO:teuthology.orchestra.run.vm08.stdout:19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:17:15.200 INFO:teuthology.packaging:The installed version of ceph is 19.2.3-678-ge911bdeb-1jammy 2026-03-09T20:17:15.201 INFO:teuthology.task.install:The correct ceph version 19.2.3-678-ge911bdeb-1jammy is installed. 2026-03-09T20:17:15.201 INFO:teuthology.task.install.util:Shipping valgrind.supp... 2026-03-09T20:17:15.202 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:17:15.202 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T20:17:15.211 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T20:17:15.211 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T20:17:15.219 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T20:17:15.219 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/home/ubuntu/cephtest/valgrind.supp 2026-03-09T20:17:15.249 INFO:teuthology.task.install.util:Shipping 'daemon-helper'... 2026-03-09T20:17:15.249 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:17:15.249 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T20:17:15.260 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T20:17:15.308 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T20:17:15.308 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T20:17:15.316 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T20:17:15.365 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T20:17:15.365 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/daemon-helper 2026-03-09T20:17:15.373 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/daemon-helper 2026-03-09T20:17:15.420 INFO:teuthology.task.install.util:Shipping 'adjust-ulimits'... 2026-03-09T20:17:15.421 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:17:15.421 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T20:17:15.428 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T20:17:15.475 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T20:17:15.475 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T20:17:15.485 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T20:17:15.533 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T20:17:15.534 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/adjust-ulimits 2026-03-09T20:17:15.542 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/adjust-ulimits 2026-03-09T20:17:15.592 INFO:teuthology.task.install.util:Shipping 'stdin-killer'... 
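[Note] The "Shipping ..." steps above (valgrind.supp, daemon-helper, adjust-ulimits, and stdin-killer below) all follow the same two-command pattern: pipe the file contents into `sudo dd of=<path>` so it lands as root, then `sudo chmod a=rx` to make it world-readable and executable. A rough sketch of that pattern, with a hypothetical destination path:

    import subprocess

    def ship_file(contents: bytes, dest: str, executable: bool = True):
        """Write `contents` to a root-owned path the way the log does: dd, then chmod."""
        # sudo dd of=<dest> reads the payload from stdin and writes it as root.
        subprocess.run(["sudo", "dd", f"of={dest}"], input=contents, check=True)
        if executable:
            # a=rx matches the log: readable and executable for everyone, not writable.
            subprocess.run(["sudo", "chmod", "a=rx", "--", dest], check=True)

    # Example with a hypothetical helper script body and path:
    # ship_file(b"#!/bin/sh\nexec \"$@\"\n", "/usr/local/bin/example-helper")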
2026-03-09T20:17:15.592 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:17:15.592 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T20:17:15.600 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T20:17:15.647 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T20:17:15.647 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T20:17:15.656 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T20:17:15.706 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T20:17:15.706 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/usr/bin/stdin-killer 2026-03-09T20:17:15.713 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod a=rx -- /usr/bin/stdin-killer 2026-03-09T20:17:15.761 INFO:teuthology.run_tasks:Running task cephadm... 2026-03-09T20:17:15.805 INFO:tasks.cephadm:Config: {'conf': {'global': {'mon election default strategy': 1}, 'mgr': {'debug mgr': 20, 'debug ms': 1, 'mgr/cephadm/use_agent': False}, 'mon': {'debug mon': 20, 'debug ms': 1, 'debug paxos': 20}, 'osd': {'debug ms': 1, 'debug osd': 20, 'osd mclock iops capacity threshold hdd': 49000}}, 'flavor': 'default', 'log-ignorelist': ['\\(MDS_ALL_DOWN\\)', '\\(MDS_UP_LESS_THAN_MAX\\)', 'MON_DOWN', 'mons down', 'mon down', 'out of quorum', 'CEPHADM_STRAY_DAEMON', 'CEPHADM_FAILED_DAEMON'], 'log-only-match': ['CEPHADM_'], 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df'} 2026-03-09T20:17:15.805 INFO:tasks.cephadm:Cluster image is quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:17:15.806 INFO:tasks.cephadm:Cluster fsid is f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:17:15.806 INFO:tasks.cephadm:Choosing monitor IPs and ports... 2026-03-09T20:17:15.806 INFO:tasks.cephadm:Monitor IPs: {'mon.a': '192.168.123.103', 'mon.b': '192.168.123.104', 'mon.c': '192.168.123.108'} 2026-03-09T20:17:15.806 INFO:tasks.cephadm:First mon is mon.a on vm03 2026-03-09T20:17:15.806 INFO:tasks.cephadm:First mgr is a 2026-03-09T20:17:15.806 INFO:tasks.cephadm:Normalizing hostnames... 
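[Note] The cephadm task bookkeeping here is mechanical: the cluster image tag is just the ceph-ci registry plus the job's sha1, and the "first mon"/"first mgr" are the first roles of that type in declaration order. A tiny sketch of that selection, using the image prefix and role layout visible in this run (helper names are illustrative, not teuthology's):

    SHA1 = "e911bdebe5c8faa3800735d1568fcdca65db60df"
    REGISTRY = "quay.ceph.io/ceph-ci/ceph"  # registry used for the ceph-ci build in this run

    ROLES = [
        ["host.a", "mon.a", "mgr.a", "osd.0"],
        ["host.b", "mon.b", "mgr.b", "osd.1"],
        ["host.c", "mon.c", "osd.2"],
    ]

    def cluster_image(registry=REGISTRY, sha1=SHA1):
        # "Cluster image is quay.ceph.io/ceph-ci/ceph:<sha1>" in the log above.
        return f"{registry}:{sha1}"

    def first_daemon(roles, kind):
        # First mon / first mgr is simply the first role of that type in declaration order.
        for host_roles in roles:
            for role in host_roles:
                if role.startswith(kind + "."):
                    return role
        return None

    assert cluster_image().endswith(SHA1)
    assert first_daemon(ROLES, "mon") == "mon.a"
    assert first_daemon(ROLES, "mgr") == "mgr.a"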
2026-03-09T20:17:15.806 DEBUG:teuthology.orchestra.run.vm03:> sudo hostname $(hostname -s) 2026-03-09T20:17:15.814 DEBUG:teuthology.orchestra.run.vm04:> sudo hostname $(hostname -s) 2026-03-09T20:17:15.822 DEBUG:teuthology.orchestra.run.vm08:> sudo hostname $(hostname -s) 2026-03-09T20:17:15.830 INFO:tasks.cephadm:Downloading "compiled" cephadm from cachra 2026-03-09T20:17:15.830 DEBUG:teuthology.packaging:Querying https://shaman.ceph.com/api/search?status=ready&project=ceph&flavor=default&distros=ubuntu%2F22.04%2Fx86_64&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:17:16.437 INFO:tasks.cephadm:builder_project result: [{'url': 'https://1.chacra.ceph.com/r/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'chacra_url': 'https://1.chacra.ceph.com/repos/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/flavors/default/', 'ref': 'squid', 'sha1': 'e911bdebe5c8faa3800735d1568fcdca65db60df', 'distro': 'ubuntu', 'distro_version': '22.04', 'distro_codename': 'jammy', 'modified': '2026-02-25 19:37:07.680480', 'status': 'ready', 'flavor': 'default', 'project': 'ceph', 'archs': ['x86_64'], 'extra': {'version': '19.2.3-678-ge911bdeb', 'package_manager_version': '19.2.3-678-ge911bdeb-1jammy', 'build_url': 'https://jenkins.ceph.com/job/ceph-dev-pipeline/3275/', 'root_build_cause': '', 'node_name': '10.20.192.98+toko08', 'job_name': 'ceph-dev-pipeline'}}] 2026-03-09T20:17:17.012 INFO:tasks.util.chacra:got chacra host 1.chacra.ceph.com, ref squid, sha1 e911bdebe5c8faa3800735d1568fcdca65db60df from https://shaman.ceph.com/api/search/?project=ceph&distros=ubuntu%2F22.04%2Fx86_64&flavor=default&sha1=e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:17:17.013 INFO:tasks.cephadm:Discovered cachra url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-09T20:17:17.013 INFO:tasks.cephadm:Downloading cephadm from url: https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm 2026-03-09T20:17:17.013 DEBUG:teuthology.orchestra.run.vm03:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-09T20:17:18.384 INFO:teuthology.orchestra.run.vm03.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 20:17 /home/ubuntu/cephtest/cephadm 2026-03-09T20:17:18.384 DEBUG:teuthology.orchestra.run.vm04:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-09T20:17:19.773 INFO:teuthology.orchestra.run.vm04.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 20:17 /home/ubuntu/cephtest/cephadm 2026-03-09T20:17:19.774 DEBUG:teuthology.orchestra.run.vm08:> curl --silent -L https://1.chacra.ceph.com/binaries/ceph/squid/e911bdebe5c8faa3800735d1568fcdca65db60df/ubuntu/jammy/x86_64/flavors/default/cephadm > /home/ubuntu/cephtest/cephadm && ls -l /home/ubuntu/cephtest/cephadm 2026-03-09T20:17:21.129 INFO:teuthology.orchestra.run.vm08.stdout:-rw-rw-r-- 1 ubuntu ubuntu 795696 Mar 9 20:17 /home/ubuntu/cephtest/cephadm 2026-03-09T20:17:21.129 DEBUG:teuthology.orchestra.run.vm03:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x 
/home/ubuntu/cephtest/cephadm 2026-03-09T20:17:21.133 DEBUG:teuthology.orchestra.run.vm04:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-09T20:17:21.137 DEBUG:teuthology.orchestra.run.vm08:> test -s /home/ubuntu/cephtest/cephadm && test $(stat -c%s /home/ubuntu/cephtest/cephadm) -gt 1000 && chmod +x /home/ubuntu/cephtest/cephadm 2026-03-09T20:17:21.146 INFO:tasks.cephadm:Pulling image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df on all hosts... 2026-03-09T20:17:21.146 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T20:17:21.177 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T20:17:21.179 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df pull 2026-03-09T20:17:21.267 INFO:teuthology.orchestra.run.vm03.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T20:17:21.268 INFO:teuthology.orchestra.run.vm04.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T20:17:21.279 INFO:teuthology.orchestra.run.vm08.stderr:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 2026-03-09T20:18:07.798 INFO:teuthology.orchestra.run.vm04.stdout:{ 2026-03-09T20:18:07.799 INFO:teuthology.orchestra.run.vm04.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T20:18:07.799 INFO:teuthology.orchestra.run.vm04.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T20:18:07.799 INFO:teuthology.orchestra.run.vm04.stdout: "repo_digests": [ 2026-03-09T20:18:07.799 INFO:teuthology.orchestra.run.vm04.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T20:18:07.799 INFO:teuthology.orchestra.run.vm04.stdout: ] 2026-03-09T20:18:07.799 INFO:teuthology.orchestra.run.vm04.stdout:} 2026-03-09T20:18:10.949 INFO:teuthology.orchestra.run.vm03.stdout:{ 2026-03-09T20:18:10.949 INFO:teuthology.orchestra.run.vm03.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T20:18:10.949 INFO:teuthology.orchestra.run.vm03.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 2026-03-09T20:18:10.949 INFO:teuthology.orchestra.run.vm03.stdout: "repo_digests": [ 2026-03-09T20:18:10.949 INFO:teuthology.orchestra.run.vm03.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T20:18:10.949 INFO:teuthology.orchestra.run.vm03.stdout: ] 2026-03-09T20:18:10.949 INFO:teuthology.orchestra.run.vm03.stdout:} 2026-03-09T20:18:28.217 INFO:teuthology.orchestra.run.vm08.stdout:{ 2026-03-09T20:18:28.217 INFO:teuthology.orchestra.run.vm08.stdout: "ceph_version": "ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable)", 2026-03-09T20:18:28.217 INFO:teuthology.orchestra.run.vm08.stdout: "image_id": "654f31e6858eb235bbece362255b685a945f2b6a367e2b88c4930c984fbb214c", 
2026-03-09T20:18:28.218 INFO:teuthology.orchestra.run.vm08.stdout: "repo_digests": [ 2026-03-09T20:18:28.218 INFO:teuthology.orchestra.run.vm08.stdout: "quay.ceph.io/ceph-ci/ceph@sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc" 2026-03-09T20:18:28.218 INFO:teuthology.orchestra.run.vm08.stdout: ] 2026-03-09T20:18:28.218 INFO:teuthology.orchestra.run.vm08.stdout:} 2026-03-09T20:18:28.228 DEBUG:teuthology.orchestra.run.vm03:> sudo mkdir -p /etc/ceph 2026-03-09T20:18:28.243 DEBUG:teuthology.orchestra.run.vm04:> sudo mkdir -p /etc/ceph 2026-03-09T20:18:28.250 DEBUG:teuthology.orchestra.run.vm08:> sudo mkdir -p /etc/ceph 2026-03-09T20:18:28.259 DEBUG:teuthology.orchestra.run.vm03:> sudo chmod 777 /etc/ceph 2026-03-09T20:18:28.291 DEBUG:teuthology.orchestra.run.vm04:> sudo chmod 777 /etc/ceph 2026-03-09T20:18:28.302 DEBUG:teuthology.orchestra.run.vm08:> sudo chmod 777 /etc/ceph 2026-03-09T20:18:28.309 INFO:tasks.cephadm:Writing seed config... 2026-03-09T20:18:28.310 INFO:tasks.cephadm: override: [global] mon election default strategy = 1 2026-03-09T20:18:28.310 INFO:tasks.cephadm: override: [mgr] debug mgr = 20 2026-03-09T20:18:28.310 INFO:tasks.cephadm: override: [mgr] debug ms = 1 2026-03-09T20:18:28.310 INFO:tasks.cephadm: override: [mgr] mgr/cephadm/use_agent = False 2026-03-09T20:18:28.310 INFO:tasks.cephadm: override: [mon] debug mon = 20 2026-03-09T20:18:28.310 INFO:tasks.cephadm: override: [mon] debug ms = 1 2026-03-09T20:18:28.310 INFO:tasks.cephadm: override: [mon] debug paxos = 20 2026-03-09T20:18:28.310 INFO:tasks.cephadm: override: [osd] debug ms = 1 2026-03-09T20:18:28.310 INFO:tasks.cephadm: override: [osd] debug osd = 20 2026-03-09T20:18:28.310 INFO:tasks.cephadm: override: [osd] osd mclock iops capacity threshold hdd = 49000 2026-03-09T20:18:28.310 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:18:28.310 DEBUG:teuthology.orchestra.run.vm03:> dd of=/home/ubuntu/cephtest/seed.ceph.conf 2026-03-09T20:18:28.335 DEBUG:tasks.cephadm:Final config: [global] # make logging friendly to teuthology log_to_file = true log_to_stderr = false log to journald = false mon cluster log to file = true mon cluster log file level = debug mon clock drift allowed = 1.000 # replicate across OSDs, not hosts osd crush chooseleaf type = 0 #osd pool default size = 2 osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd # enable some debugging auth debug = true ms die on old message = true ms die on bug = true debug asserts on shutdown = true # adjust warnings mon max pg per osd = 10000# >= luminous mon pg warn max object skew = 0 mon osd allow primary affinity = true mon osd allow pg remap = true mon warn on legacy crush tunables = false mon warn on crush straw calc version zero = false mon warn on no sortbitwise = false mon warn on osd down out interval zero = false mon warn on too few osds = false mon_warn_on_pool_pg_num_not_power_of_two = false # disable pg_autoscaler by default for new pools osd_pool_default_pg_autoscale_mode = off # tests delete pools mon allow pool delete = true fsid = f72c9476-1bf4-11f1-9f3a-7162c3a72a6d mon election default strategy = 1 [osd] osd scrub load threshold = 5.0 osd scrub max interval = 600 osd mclock profile = high_recovery_ops osd recover clone overlap = true osd recovery max chunk = 1048576 osd deep scrub update digest min age = 30 osd map max advance = 10 osd memory target autotune = true # debugging osd debug shutdown = true osd debug op order = true osd debug verify stray on 
activate = true osd debug pg log writeout = true osd debug verify cached snaps = true osd debug verify missing on start = true osd debug misdirected ops = true osd op queue = debug_random osd op queue cut off = debug_random osd shutdown pgref assert = true bdev debug aio = true osd sloppy crc = true debug ms = 1 debug osd = 20 osd mclock iops capacity threshold hdd = 49000 [mgr] mon reweight min pgs per osd = 4 mon reweight min bytes per osd = 10 mgr/telemetry/nag = false debug mgr = 20 debug ms = 1 mgr/cephadm/use_agent = False [mon] mon data avail warn = 5 mon mgr mkfs grace = 240 mon reweight min pgs per osd = 4 mon osd reporter subtree level = osd mon osd prime pg temp = true mon reweight min bytes per osd = 10 # rotate auth tickets quickly to exercise renewal paths auth mon ticket ttl = 660# 11m auth service ticket ttl = 240# 4m # don't complain about global id reclaim mon_warn_on_insecure_global_id_reclaim = false mon_warn_on_insecure_global_id_reclaim_allowed = false debug mon = 20 debug ms = 1 debug paxos = 20 [client.rgw] rgw cache enabled = true rgw enable ops log = true rgw enable usage log = true 2026-03-09T20:18:28.335 DEBUG:teuthology.orchestra.run.vm03:mon.a> sudo journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.a.service 2026-03-09T20:18:28.377 DEBUG:teuthology.orchestra.run.vm03:mgr.a> sudo journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mgr.a.service 2026-03-09T20:18:28.421 INFO:tasks.cephadm:Bootstrapping... 2026-03-09T20:18:28.421 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df -v bootstrap --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d --config /home/ubuntu/cephtest/seed.ceph.conf --output-config /etc/ceph/ceph.conf --output-keyring /etc/ceph/ceph.client.admin.keyring --output-pub-ssh-key /home/ubuntu/cephtest/ceph.pub --mon-id a --mgr-id a --orphan-initial-daemons --skip-monitoring-stack --mon-ip 192.168.123.103 --skip-admin-label && sudo chmod +r /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:28.553 INFO:teuthology.orchestra.run.vm03.stdout:-------------------------------------------------------------------------------- 2026-03-09T20:18:28.553 INFO:teuthology.orchestra.run.vm03.stdout:cephadm ['--image', 'quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df', '-v', 'bootstrap', '--fsid', 'f72c9476-1bf4-11f1-9f3a-7162c3a72a6d', '--config', '/home/ubuntu/cephtest/seed.ceph.conf', '--output-config', '/etc/ceph/ceph.conf', '--output-keyring', '/etc/ceph/ceph.client.admin.keyring', '--output-pub-ssh-key', '/home/ubuntu/cephtest/ceph.pub', '--mon-id', 'a', '--mgr-id', 'a', '--orphan-initial-daemons', '--skip-monitoring-stack', '--mon-ip', '192.168.123.103', '--skip-admin-label'] 2026-03-09T20:18:28.553 INFO:teuthology.orchestra.run.vm03.stderr:Specifying an fsid for your cluster offers no advantages and may increase the likelihood of fsid conflicts. 2026-03-09T20:18:28.553 INFO:teuthology.orchestra.run.vm03.stdout:Verifying podman|docker is present... 2026-03-09T20:18:28.553 INFO:teuthology.orchestra.run.vm03.stdout:Verifying lvm2 is present... 2026-03-09T20:18:28.553 INFO:teuthology.orchestra.run.vm03.stdout:Verifying time synchronization is in place... 
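[Note] The bootstrap invocation above is long but entirely mechanical: every flag comes from the job (fsid, seed conf, output paths, mon/mgr ids, mon IP) plus a fixed set of skip options. A sketch that rebuilds the same argument vector so the flags are easier to read than in the single log line (values copied from the argv echoed by cephadm above):

    IMAGE = "quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df"
    FSID = "f72c9476-1bf4-11f1-9f3a-7162c3a72a6d"
    TESTDIR = "/home/ubuntu/cephtest"

    def bootstrap_args(mon_id="a", mgr_id="a", mon_ip="192.168.123.103"):
        # Mirrors the argv echoed by cephadm at the start of bootstrap above.
        return [
            "sudo", f"{TESTDIR}/cephadm",
            "--image", IMAGE,
            "-v", "bootstrap",
            "--fsid", FSID,
            "--config", f"{TESTDIR}/seed.ceph.conf",
            "--output-config", "/etc/ceph/ceph.conf",
            "--output-keyring", "/etc/ceph/ceph.client.admin.keyring",
            "--output-pub-ssh-key", f"{TESTDIR}/ceph.pub",
            "--mon-id", mon_id,
            "--mgr-id", mgr_id,
            "--orphan-initial-daemons",   # don't generate managed service specs for the seed mon/mgr
            "--skip-monitoring-stack",    # don't deploy the default monitoring stack during bootstrap
            "--mon-ip", mon_ip,
            "--skip-admin-label",
        ]

    print(" ".join(bootstrap_args()))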
2026-03-09T20:18:28.557 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T20:18:28.557 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T20:18:28.559 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T20:18:28.559 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-09T20:18:28.561 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-09T20:18:28.561 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-09T20:18:28.563 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-09T20:18:28.563 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-09T20:18:28.565 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-09T20:18:28.565 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout masked 2026-03-09T20:18:28.567 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-09T20:18:28.567 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-09T20:18:28.569 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-09T20:18:28.569 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-09T20:18:28.571 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-09T20:18:28.572 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-09T20:18:28.574 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled 2026-03-09T20:18:28.576 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active 2026-03-09T20:18:28.576 INFO:teuthology.orchestra.run.vm03.stdout:Unit ntp.service is enabled and running 2026-03-09T20:18:28.576 INFO:teuthology.orchestra.run.vm03.stdout:Repeating the final host check... 
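[Note] The time-sync verification walks a list of candidate units and looks for one that is both enabled and active; the non-zero systemctl exit codes above are just probes for units that are absent, masked, or inactive (everything but ntp.service on this host) and are not errors by themselves. A sketch of that probe loop, assuming systemd is present (the function names are illustrative, not cephadm's):

    import subprocess

    # Units probed in the log, in the same order; ntp.service is the one that wins here.
    CANDIDATE_UNITS = [
        "chrony.service",
        "chronyd.service",
        "systemd-timesyncd.service",
        "ntpd.service",
        "ntp.service",
    ]

    def unit_state(unit, verb):
        # systemctl is-enabled / is-active exit non-zero for missing or inactive units.
        p = subprocess.run(["systemctl", verb, unit], capture_output=True, text=True)
        return p.returncode, p.stdout.strip()

    def find_time_sync_unit(units=CANDIDATE_UNITS):
        for unit in units:
            enabled_rc, _ = unit_state(unit, "is-enabled")
            active_rc, _ = unit_state(unit, "is-active")
            if enabled_rc == 0 and active_rc == 0:
                print(f"Unit {unit} is enabled and running")
                return unit
        return None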
2026-03-09T20:18:28.576 INFO:teuthology.orchestra.run.vm03.stdout:docker (/usr/bin/docker) is present 2026-03-09T20:18:28.576 INFO:teuthology.orchestra.run.vm03.stdout:systemctl is present 2026-03-09T20:18:28.576 INFO:teuthology.orchestra.run.vm03.stdout:lvcreate is present 2026-03-09T20:18:28.578 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chrony.service 2026-03-09T20:18:28.578 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chrony.service: No such file or directory 2026-03-09T20:18:28.580 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chrony.service 2026-03-09T20:18:28.580 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-09T20:18:28.582 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled chronyd.service 2026-03-09T20:18:28.582 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for chronyd.service: No such file or directory 2026-03-09T20:18:28.584 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active chronyd.service 2026-03-09T20:18:28.584 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-09T20:18:28.587 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled systemd-timesyncd.service 2026-03-09T20:18:28.587 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout masked 2026-03-09T20:18:28.589 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active systemd-timesyncd.service 2026-03-09T20:18:28.589 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-09T20:18:28.591 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl is-enabled ntpd.service 2026-03-09T20:18:28.591 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to get unit file state for ntpd.service: No such file or directory 2026-03-09T20:18:28.593 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 3 from systemctl is-active ntpd.service 2026-03-09T20:18:28.593 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout inactive 2026-03-09T20:18:28.596 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout enabled 2026-03-09T20:18:28.598 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stdout active 2026-03-09T20:18:28.598 INFO:teuthology.orchestra.run.vm03.stdout:Unit ntp.service is enabled and running 2026-03-09T20:18:28.598 INFO:teuthology.orchestra.run.vm03.stdout:Host looks OK 2026-03-09T20:18:28.598 INFO:teuthology.orchestra.run.vm03.stdout:Cluster fsid: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:28.598 INFO:teuthology.orchestra.run.vm03.stdout:Acquiring lock 140193431800368 on /run/cephadm/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d.lock 2026-03-09T20:18:28.598 INFO:teuthology.orchestra.run.vm03.stdout:Lock 140193431800368 acquired on /run/cephadm/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d.lock 2026-03-09T20:18:28.599 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 3300 ... 2026-03-09T20:18:28.599 INFO:teuthology.orchestra.run.vm03.stdout:Verifying IP 192.168.123.103 port 6789 ... 
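[Note] "Verifying IP ... port 3300 / 6789" means confirming nothing is already listening on the mon's msgr v2 and v1 ports at that address. One way to perform the same check (a sketch, not necessarily cephadm's exact code path) is to try binding the address and port and treat a bind failure as "port in use":

    import socket

    MON_PORTS = (3300, 6789)  # msgr v2 and v1, as in the log above

    def port_is_free(ip: str, port: int) -> bool:
        """Return True if we can bind ip:port, i.e. nothing else is listening there."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((ip, port))
            except OSError:
                return False
        return True

    def verify_mon_ports(ip: str, ports=MON_PORTS):
        for port in ports:
            print(f"Verifying IP {ip} port {port} ...")
            if not port_is_free(ip, port):
                raise RuntimeError(f"Cannot bind to IP {ip} port {port}")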
2026-03-09T20:18:28.599 INFO:teuthology.orchestra.run.vm03.stdout:Base mon IP(s) is [192.168.123.103:3300, 192.168.123.103:6789], mon addrv is [v2:192.168.123.103:3300,v1:192.168.123.103:6789] 2026-03-09T20:18:28.600 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout default via 192.168.123.1 dev ens3 proto dhcp src 192.168.123.103 metric 100 2026-03-09T20:18:28.600 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 2026-03-09T20:18:28.600 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 192.168.123.0/24 dev ens3 proto kernel scope link src 192.168.123.103 metric 100 2026-03-09T20:18:28.600 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 192.168.123.1 dev ens3 proto dhcp scope link src 192.168.123.103 metric 100 2026-03-09T20:18:28.601 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout ::1 dev lo proto kernel metric 256 pref medium 2026-03-09T20:18:28.601 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout fe80::/64 dev ens3 proto kernel metric 256 pref medium 2026-03-09T20:18:28.602 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 1: lo: mtu 65536 state UNKNOWN qlen 1000 2026-03-09T20:18:28.602 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout inet6 ::1/128 scope host 2026-03-09T20:18:28.602 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T20:18:28.602 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout 2: ens3: mtu 1500 state UP qlen 1000 2026-03-09T20:18:28.602 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout inet6 fe80::5055:ff:fe00:3/64 scope link 2026-03-09T20:18:28.602 INFO:teuthology.orchestra.run.vm03.stdout:/usr/sbin/ip: stdout valid_lft forever preferred_lft forever 2026-03-09T20:18:28.602 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24` 2026-03-09T20:18:28.603 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.0/24` 2026-03-09T20:18:28.603 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.1/32` 2026-03-09T20:18:28.603 INFO:teuthology.orchestra.run.vm03.stdout:Mon IP `192.168.123.103` is in CIDR network `192.168.123.1/32` 2026-03-09T20:18:28.603 INFO:teuthology.orchestra.run.vm03.stdout:Inferred mon public CIDR from local network configuration ['192.168.123.0/24', '192.168.123.0/24', '192.168.123.1/32', '192.168.123.1/32'] 2026-03-09T20:18:28.603 INFO:teuthology.orchestra.run.vm03.stdout:Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network 2026-03-09T20:18:28.603 INFO:teuthology.orchestra.run.vm03.stdout:Pulling container image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df... 
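[Note] The CIDR inference above comes from matching the mon IP against the networks that appear in the local route/address listing. A strict containment test with Python's ipaddress module looks like the sketch below; note that under strict containment the 192.168.123.1/32 gateway host route would not match 192.168.123.103, so the repeated "/32" lines in the log reflect a looser per-route match than pure CIDR membership:

    import ipaddress

    MON_IP = "192.168.123.103"
    # Networks visible in the `ip route` output above (the /32 is the DHCP gateway host route).
    ROUTED_NETWORKS = ["192.168.123.0/24", "172.17.0.0/16", "192.168.123.1/32"]

    def matching_networks(ip: str, networks):
        addr = ipaddress.ip_address(ip)
        return [n for n in networks if addr in ipaddress.ip_network(n, strict=False)]

    for net in matching_networks(MON_IP, ROUTED_NETWORKS):
        print(f"Mon IP `{MON_IP}` is in CIDR network `{net}`")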
2026-03-09T20:18:29.646 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout e911bdebe5c8faa3800735d1568fcdca65db60df: Pulling from ceph-ci/ceph 2026-03-09T20:18:29.646 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout Digest: sha256:8fda260ab1d2d3118a1622f7df75f44f285dfe74e71793626152a711c12bf2cc 2026-03-09T20:18:29.646 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout Status: Image is up to date for quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:18:29.646 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/docker: stdout quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df 2026-03-09T20:18:29.794 INFO:teuthology.orchestra.run.vm03.stdout:ceph: stdout ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T20:18:29.794 INFO:teuthology.orchestra.run.vm03.stdout:Ceph version: ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable) 2026-03-09T20:18:29.794 INFO:teuthology.orchestra.run.vm03.stdout:Extracting ceph user uid/gid from container image... 2026-03-09T20:18:29.895 INFO:teuthology.orchestra.run.vm03.stdout:stat: stdout 167 167 2026-03-09T20:18:29.896 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial keys... 2026-03-09T20:18:30.019 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQAVK69pmOAVOxAAHRinpj6ClGvApP4rQYieYQ== 2026-03-09T20:18:30.114 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQAWK69pdU9IBRAAHoullh6A1bNTGEbkokq35w== 2026-03-09T20:18:30.206 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-authtool: stdout AQAWK69per8OCxAA8A22iXKxCZTaybkkb/hYIw== 2026-03-09T20:18:30.207 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial monmap... 2026-03-09T20:18:30.297 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T20:18:30.297 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout setting min_mon_release = quincy 2026-03-09T20:18:30.297 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: set fsid to f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:30.297 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: stdout /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T20:18:30.297 INFO:teuthology.orchestra.run.vm03.stdout:monmaptool for a [v2:192.168.123.103:3300,v1:192.168.123.103:6789] on /usr/bin/monmaptool: monmap file /tmp/monmap 2026-03-09T20:18:30.297 INFO:teuthology.orchestra.run.vm03.stdout:setting min_mon_release = quincy 2026-03-09T20:18:30.297 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: set fsid to f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:30.297 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors) 2026-03-09T20:18:30.297 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:30.298 INFO:teuthology.orchestra.run.vm03.stdout:Creating mon... 
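[Note] "Creating initial keys" and "Creating initial monmap" correspond to the ceph-authtool and monmaptool steps that any manual mon deployment performs; the bootstrap above runs them inside the container and registers mon.a with the v2/v1 addrvec form, and it generates three keys in total (this sketch only covers the mon key). A rough equivalent using the simpler single-address form, with illustrative output paths rather than the exact files cephadm writes:

    import subprocess

    FSID = "f72c9476-1bf4-11f1-9f3a-7162c3a72a6d"
    MON_NAME = "a"
    MON_IP = "192.168.123.103"

    def create_initial_keyring(path="/tmp/ceph.mon.keyring"):
        # Generate a mon. key into a fresh keyring file.
        subprocess.run([
            "ceph-authtool", "--create-keyring", path,
            "--gen-key", "-n", "mon.",
            "--cap", "mon", "allow *",
        ], check=True)
        return path

    def create_initial_monmap(path="/tmp/monmap"):
        # Single-monitor monmap carrying the cluster fsid, as in
        # "writing epoch 0 to /tmp/monmap (1 monitors)" above.
        subprocess.run([
            "monmaptool", "--create", "--clobber",
            "--add", MON_NAME, MON_IP,
            "--fsid", FSID, path,
        ], check=True)
        return path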
2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.379+0000 7ff0f8e4fd80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.379+0000 7ff0f8e4fd80 1 imported monmap: 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr epoch 0 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr last_changed 2026-03-09T20:18:30.276494+0000 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr min_mon_release 17 (quincy) 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr election_strategy: 1 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.379+0000 7ff0f8e4fd80 0 /usr/bin/ceph-mon: set fsid to f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Git sha 0 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: DB SUMMARY 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: DB Session ID: 1V7GMH09L06XY2FUSIEQ 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files: 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: 
stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.create_if_missing: 1 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.env: 0x55791eabcdc0 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.info_log: 0x557953334da0 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.statistics: (nil) 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.use_fsync: 0 2026-03-09T20:18:30.455 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: 
Options.use_direct_reads: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.db_log_dir: 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.wal_dir: 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.write_buffer_manager: 0x55795332b5e0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: 
Options.wal_recovery_mode: 2 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.unordered_write: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.row_cache: None 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.wal_filter: None 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.wal_compression: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T20:18:30.456 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T20:18:30.456 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: 
Options.max_open_files: -1 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Compression algorithms supported: 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: kZSTD supported: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:317] Creating manifest 1 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.383+0000 7ff0f8e4fd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 2026-03-09T20:18:30.457 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.merge_operator: 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_filter: None 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557953327520) 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks: 1 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr pin_top_level_index_and_filter: 1 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr index_type: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr data_block_index_type: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr index_shortening: 1 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr data_block_hash_table_util_ratio: 0.750000 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr checksum: 4 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr no_block_cache: 0 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache: 0x55795334d350 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache_name: BinnedLRUCache 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache_options: 2026-03-09T20:18:30.457 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr capacity : 536870912 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr num_shard_bits : 4 2026-03-09T20:18:30.457 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr strict_capacity_limit : 0 2026-03-09T20:18:30.460 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr high_pri_pool_ratio: 0.000 2026-03-09T20:18:30.460 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_cache_compressed: (nil) 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr persistent_cache: (nil) 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_size: 4096 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_size_deviation: 10 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_restart_interval: 16 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr index_block_restart_interval: 1 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr metadata_block_size: 4096 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr partition_filters: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr use_delta_encoding: 1 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr filter_policy: bloomfilter 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr whole_key_filtering: 1 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr verify_compression: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr read_amp_bytes_per_bit: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr format_version: 5 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr enable_index_compression: 1 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr block_align: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr max_auto_readahead_size: 262144 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr prepopulate_block_cache: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr initial_auto_readahead_size: 8192 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr num_file_reads_for_auto_readahead: 2 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compression: NoCompression 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.num_levels: 7 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T20:18:30.461 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T20:18:30.461 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T20:18:30.462 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 
rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.ttl: 2592000 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 
2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1c2ce899-d713-4207-b01f-de1df9cce968 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: [db/version_set.cc:5047] 
Creating manifest 5 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55795334ee00 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.387+0000 7ff0f8e4fd80 4 rocksdb: DB pointer 0x557953432000 2026-03-09T20:18:30.462 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.391+0000 7ff0f05d9640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.391+0000 7ff0f05d9640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** DB Stats ** 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Sum 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.00 0.00 0 0.000 0 0 0.0 0.0 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** Compaction Stats [default] ** 2026-03-09T20:18:30.463 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(Total Files): cumulative 0, interval 0 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr AddFile(Keys): cumulative 0, interval 0 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Block cache BinnedLRUCache@0x55795334d350#7 capacity: 512.00 MB usage: 0.00 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.4e-05 secs_since: 0 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr Block cache entry stats(count,size,portion): Misc(1,0.00 KB,0%) 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr ** File Read Latency Histogram By Level [default] ** 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.391+0000 7ff0f8e4fd80 4 rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.391+0000 
7ff0f8e4fd80 4 rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph-mon: stderr debug 2026-03-09T20:18:30.391+0000 7ff0f8e4fd80 0 /usr/bin/ceph-mon: created monfs at /var/lib/ceph/mon/ceph-a for mon.a 2026-03-09T20:18:30.463 INFO:teuthology.orchestra.run.vm03.stdout:create mon.a on 2026-03-09T20:18:30.658 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Removed /etc/systemd/system/multi-user.target.wants/ceph.target. 2026-03-09T20:18:30.815 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph.target → /etc/systemd/system/ceph.target. 2026-03-09T20:18:30.998 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/multi-user.target.wants/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d.target → /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d.target. 2026-03-09T20:18:30.998 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph.target.wants/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d.target → /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d.target. 2026-03-09T20:18:31.178 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.a 2026-03-09T20:18:31.178 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.a.service: Unit ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.a.service not loaded. 2026-03-09T20:18:31.360 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d.target.wants/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.a.service → /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service. 2026-03-09T20:18:31.368 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T20:18:31.368 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . firewalld.service is not available 2026-03-09T20:18:31.369 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon to start... 2026-03-09T20:18:31.369 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mon... 
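Note: the non-zero exit from `systemctl reset-failed` appears to be harmless here; the unit simply is not loaded yet on a fresh host, and the log shows bootstrap continuing to create the unit symlink and start the mon. To poke at the newly created mon unit by hand, something along these lines should work (the fsid is the one from this run; jq usage is illustrative):

    systemctl status ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.a.service
    cephadm ls | jq -r '.[].name'   # daemons cephadm knows about on this host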
2026-03-09T20:18:31.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:31 vm03 bash[20232]: cluster 2026-03-09T20:18:31.504964+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout cluster: 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout id: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout health: HEALTH_OK 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout services: 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon: 1 daemons, quorum a (age 0.249861s) 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr: no daemons active 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd: 0 osds: 0 up, 0 in 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout data: 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pools: 0 pools, 0 pgs 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout objects: 0 objects, 0 B 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout usage: 0 B used, 0 B / 0 B avail 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout pgs: 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:mon is available 2026-03-09T20:18:31.797 INFO:teuthology.orchestra.run.vm03.stdout:Assimilating anything we can from ceph.conf... 
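At this point the bootstrap has a single mon in quorum and no mgr yet. A machine-readable version of the "mon is available" check could look roughly like this (illustrative, not part of the test script):

    ceph -s -f json | jq -e '.health.status == "HEALTH_OK"'
    ceph -s -f json | jq -r '.quorum_names[]'   # expect only "a" at this stage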
2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789] 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/cephadm/use_agent = False 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T20:18:31.974 INFO:teuthology.orchestra.run.vm03.stdout:Generating new minimal ceph.conf... 2026-03-09T20:18:32.137 INFO:teuthology.orchestra.run.vm03.stdout:Restarting the monitor... 2026-03-09T20:18:32.346 INFO:teuthology.orchestra.run.vm03.stdout:Setting public_network to 192.168.123.1/32,192.168.123.0/24 in mon config section 2026-03-09T20:18:32.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 systemd[1]: Stopping Ceph mon.a for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d... 2026-03-09T20:18:32.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20232]: debug 2026-03-09T20:18:32.171+0000 7fb1882b7640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T20:18:32.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20232]: debug 2026-03-09T20:18:32.171+0000 7fb1882b7640 -1 mon.a@0(leader) e1 *** Got Signal Terminated *** 2026-03-09T20:18:32.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20614]: ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d-mon-a 2026-03-09T20:18:32.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 systemd[1]: ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.a.service: Deactivated successfully. 2026-03-09T20:18:32.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 systemd[1]: Stopped Ceph mon.a for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d. 
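The assimilate / minimal-conf / public_network sequence cephadm performs above corresponds roughly to the following standalone commands (paths and the network value are illustrative; during bootstrap cephadm drives this internally):

    ceph config assimilate-conf -i /etc/ceph/ceph.conf        # pull loose ceph.conf options into the mon config database
    ceph config generate-minimal-conf > /etc/ceph/ceph.conf   # keep only what clients need (fsid, mon_host)
    ceph config set mon public_network 192.168.123.0/24       # pin the mon public network before more mons are added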
2026-03-09T20:18:32.385 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 systemd[1]: Started Ceph mon.a for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d. 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 0 pidfile_write: ignore empty --pid-file 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 0 load: jerasure load: lrc 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Git sha 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: DB SUMMARY 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: DB Session ID: 9XZBZDFG5LDW5TVAU7R3 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: MANIFEST file: MANIFEST-000010 size: 179 Bytes 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 1, files: 000008.sst 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000009.log size: 75507 ; 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 
2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.env: 0x5605b584adc0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.info_log: 0x5605e9a10700 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.statistics: (nil) 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.use_fsync: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T20:18:32.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.db_log_dir: 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.wal_dir: 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.write_buffer_manager: 0x5605e9a15900 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T20:18:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 
7fc6ccd83d80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.unordered_write: 0 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.row_cache: None 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.wal_filter: None 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T20:18:32.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.wal_compression: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 
bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 
2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_open_files: -1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Compression algorithms supported: 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: kZSTD supported: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: [db/column_family.cc:630] --------------- 
Options for column family [default]: 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.merge_operator: 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_filter: None 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5605e9a10640) 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cache_index_and_filter_blocks: 1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: pin_top_level_index_and_filter: 1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: index_type: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: data_block_index_type: 0 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: index_shortening: 1 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T20:18:32.659 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: checksum: 4 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: no_block_cache: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: block_cache: 0x5605e9a37350 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: block_cache_name: BinnedLRUCache 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: block_cache_options: 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: capacity : 536870912 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: num_shard_bits : 4 2026-03-09T20:18:32.660 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: strict_capacity_limit : 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: high_pri_pool_ratio: 0.000 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: block_cache_compressed: (nil) 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: persistent_cache: (nil) 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: block_size: 4096 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: block_size_deviation: 10 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: block_restart_interval: 16 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: index_block_restart_interval: 1 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: metadata_block_size: 4096 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: partition_filters: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: use_delta_encoding: 1 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: filter_policy: bloomfilter 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: whole_key_filtering: 1 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: verify_compression: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: read_amp_bytes_per_bit: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: format_version: 5 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: enable_index_compression: 1 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: block_align: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: max_auto_readahead_size: 262144 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: prepopulate_block_cache: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: initial_auto_readahead_size: 8192 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: num_file_reads_for_auto_readahead: 2 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compression: NoCompression 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 
2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.num_levels: 7 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: 
Options.compression_opts.strategy: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T20:18:32.660 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 
bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 
2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.ttl: 2592000 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: 
Options.periodic_compaction_seconds: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.443+0000 7fc6ccd83d80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.471+0000 7fc6ccd83d80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.471+0000 7fc6ccd83d80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5 2026-03-09T20:18:32.661 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.471+0000 7fc6ccd83d80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1c2ce899-d713-4207-b01f-de1df9cce968 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.475+0000 7fc6ccd83d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773087512477587, "job": 1, "event": "recovery_started", 
"wal_files": [9]} 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.475+0000 7fc6ccd83d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.475+0000 7fc6ccd83d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773087512479504, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 72588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 225, "table_properties": {"data_size": 70867, "index_size": 174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 517, "raw_key_size": 9705, "raw_average_key_size": 49, "raw_value_size": 65346, "raw_average_value_size": 333, "num_data_blocks": 8, "num_entries": 196, "num_filter_entries": 196, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773087512, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1c2ce899-d713-4207-b01f-de1df9cce968", "db_session_id": "9XZBZDFG5LDW5TVAU7R3", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}} 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.475+0000 7fc6ccd83d80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773087512479566, "job": 1, "event": "recovery_finished"} 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.475+0000 7fc6ccd83d80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 15 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5605e9a38e00 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 4 rocksdb: DB pointer 0x5605e9b4e000 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6c2b4d640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6c2b4d640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 
bash[20708]: ** DB Stats ** 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: ** Compaction Stats [default] ** 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: L0 2/0 72.74 KB 0.5 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 40.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Sum 2/0 72.74 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 40.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 40.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: ** Compaction Stats [default] ** 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 40.0 0.00 0.00 1 0.002 0 0 0.0 0.0 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 
2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Cumulative compaction: 0.00 GB write, 2.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Interval compaction: 0.00 GB write, 2.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Block cache BinnedLRUCache@0x5605e9a37350#7 capacity: 512.00 MB usage: 1.06 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9e-06 secs_since: 0 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: Block cache entry stats(count,size,portion): FilterBlock(2,0.70 KB,0.00013411%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%) 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 0 starting mon.a rank 0 at public addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] at bind addrs [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 1 mon.a@-1(???) 
e1 preinit fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 5 mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 5 mon.a@-1(???).mds e0 Unable to load 'last_metadata' 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 0 mon.a@-1(???).mds e1 new map 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 0 mon.a@-1(???).mds e1 print_map 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: e1 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: btime 2026-03-09T20:18:31:513185+0000 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: legacy client fscid: -1 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: No filesystems configured 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 0 mon.a@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 0 mon.a@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 1 mon.a@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 4 mon.a@-1(???).mgr e0 loading version 1 2026-03-09T20:18:32.662 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 4 mon.a@-1(???).mgr e1 active server: (0) 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: 
debug 2026-03-09T20:18:32.479+0000 7fc6ccd83d80 4 mon.a@-1(???).mgr e1 mkfs or daemon transitioned to available, loading commands 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488494+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488494+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488527+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488527+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488532+0000 mon.a (mon.0) 3 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488532+0000 mon.a (mon.0) 3 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488536+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T20:18:30.276494+0000 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488536+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T20:18:30.276494+0000 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488543+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488543+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488547+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488547+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488550+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488550+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488554+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488554+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 
2026-03-09T20:18:32.488752+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488752+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488767+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.488767+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.489232+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T20:18:32.663 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 bash[20708]: cluster 2026-03-09T20:18:32.489232+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T20:18:32.755 INFO:teuthology.orchestra.run.vm03.stdout:Wrote config to /etc/ceph/ceph.conf 2026-03-09T20:18:32.756 INFO:teuthology.orchestra.run.vm03.stdout:Wrote keyring to /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:32.756 INFO:teuthology.orchestra.run.vm03.stdout:Creating mgr... 2026-03-09T20:18:32.756 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:9283 ... 2026-03-09T20:18:32.756 INFO:teuthology.orchestra.run.vm03.stdout:Verifying port 0.0.0.0:8765 ... 2026-03-09T20:18:32.923 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:32 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:18:32.923 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:32 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:18:32.927 INFO:teuthology.orchestra.run.vm03.stdout:Non-zero exit code 1 from systemctl reset-failed ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mgr.a 2026-03-09T20:18:32.927 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Failed to reset failed state of unit ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mgr.a.service: Unit ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mgr.a.service not loaded. 2026-03-09T20:18:33.089 INFO:teuthology.orchestra.run.vm03.stdout:systemctl: stderr Created symlink /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d.target.wants/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mgr.a.service → /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service. 2026-03-09T20:18:33.096 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T20:18:33.096 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to enable service . 
firewalld.service is not available 2026-03-09T20:18:33.096 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T20:18:33.096 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[9283, 8765]>. firewalld.service is not available 2026-03-09T20:18:33.096 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr to start... 2026-03-09T20:18:33.096 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr... 2026-03-09T20:18:33.181 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:33 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:18:33.181 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:32 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:18:33.181 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:32 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:18:33.181 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:33 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:18:33.181 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:33 vm03 systemd[1]: Started Ceph mgr.a for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d. 
2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "f72c9476-1bf4-11f1-9f3a-7162c3a72a6d", 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 
"num_objects": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T20:18:31:513185+0000", 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T20:18:33.317 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T20:18:31.513941+0000", 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:33.318 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (1/15)... 
2026-03-09T20:18:33.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:33 vm03 bash[20968]: debug 2026-03-09T20:18:33.295+0000 7f84dee53140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T20:18:33.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:33 vm03 bash[20968]: debug 2026-03-09T20:18:33.331+0000 7f84dee53140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:18:33.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:33 vm03 bash[20968]: debug 2026-03-09T20:18:33.435+0000 7f84dee53140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:18:34.074 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:33 vm03 bash[20708]: audit 2026-03-09T20:18:32.712498+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/2780831427' entity='client.admin' 2026-03-09T20:18:34.074 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:33 vm03 bash[20708]: audit 2026-03-09T20:18:32.712498+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/2780831427' entity='client.admin' 2026-03-09T20:18:34.074 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:33 vm03 bash[20708]: audit 2026-03-09T20:18:33.269522+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.103:0/1660484640' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:18:34.074 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:33 vm03 bash[20708]: audit 2026-03-09T20:18:33.269522+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.103:0/1660484640' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:18:34.074 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:33 vm03 bash[20968]: debug 2026-03-09T20:18:33.687+0000 7f84dee53140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:18:34.074 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.071+0000 7f84dee53140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:18:34.381 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.143+0000 7f84dee53140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T20:18:34.381 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T20:18:34.381 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 
2026-03-09T20:18:34.381 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: from numpy import show_config as show_numpy_config 2026-03-09T20:18:34.381 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.251+0000 7f84dee53140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:18:34.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.379+0000 7f84dee53140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:18:34.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.411+0000 7f84dee53140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T20:18:34.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.443+0000 7f84dee53140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:18:34.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.479+0000 7f84dee53140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T20:18:34.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.523+0000 7f84dee53140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:18:35.196 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.915+0000 7f84dee53140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T20:18:35.197 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.951+0000 7f84dee53140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T20:18:35.197 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:34 vm03 bash[20968]: debug 2026-03-09T20:18:34.987+0000 7f84dee53140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:18:35.197 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20968]: debug 2026-03-09T20:18:35.119+0000 7f84dee53140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:18:35.197 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20968]: debug 2026-03-09T20:18:35.155+0000 7f84dee53140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T20:18:35.462 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20968]: debug 2026-03-09T20:18:35.191+0000 7f84dee53140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:18:35.462 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20968]: debug 2026-03-09T20:18:35.291+0000 7f84dee53140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "f72c9476-1bf4-11f1-9f3a-7162c3a72a6d", 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 
2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 3, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T20:18:35.555 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 
2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T20:18:31:513185+0000", 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "by_rank": [], 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": false, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T20:18:31.513941+0000", 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:35.556 INFO:teuthology.orchestra.run.vm03.stdout:mgr not available, waiting (2/15)... 2026-03-09T20:18:35.714 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20708]: audit 2026-03-09T20:18:35.511427+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.103:0/2523825932' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:18:35.714 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20708]: audit 2026-03-09T20:18:35.511427+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.103:0/2523825932' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:18:35.714 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20968]: debug 2026-03-09T20:18:35.459+0000 7f84dee53140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:18:35.714 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20968]: debug 2026-03-09T20:18:35.639+0000 7f84dee53140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:18:35.714 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20968]: debug 2026-03-09T20:18:35.671+0000 7f84dee53140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:18:36.067 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20968]: debug 2026-03-09T20:18:35.711+0000 7f84dee53140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T20:18:36.067 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:35 vm03 bash[20968]: debug 2026-03-09T20:18:35.851+0000 7f84dee53140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:36.329 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20968]: debug 2026-03-09T20:18:36.063+0000 7f84dee53140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T20:18:36.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: cluster 2026-03-09T20:18:36.068742+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: cluster 2026-03-09T20:18:36.068742+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: cluster 2026-03-09T20:18:36.072300+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.0036244s) 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: cluster 2026-03-09T20:18:36.072300+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.0036244s) 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073529+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073529+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073584+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073584+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073834+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:36.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073834+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073893+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073893+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073945+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073945+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073999+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.073999+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.074589+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.074589+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.075114+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.075114+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: cluster 2026-03-09T20:18:36.078597+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon a is now available 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: cluster 2026-03-09T20:18:36.078597+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon a is now available 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 
2026-03-09T20:18:36.086449+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.086449+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.089242+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.089242+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.089450+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.089450+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.091339+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.091339+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.093120+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:18:36.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:36 vm03 bash[20708]: audit 2026-03-09T20:18:36.093120+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsid": "f72c9476-1bf4-11f1-9f3a-7162c3a72a6d", 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "health": { 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "status": "HEALTH_OK", 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "checks": {}, 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mutes": [] 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "election_epoch": 5, 2026-03-09T20:18:37.828 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum": [ 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 0 2026-03-09T20:18:37.828 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_names": [ 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "a" 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "quorum_age": 5, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "monmap": { 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "min_mon_release_name": "squid", 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_mons": 1 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osdmap": { 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_osds": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_up_osds": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_up_since": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_in_osds": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "osd_in_since": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_remapped_pgs": 0 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgmap": { 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "pgs_by_state": [], 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pgs": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_pools": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_objects": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "data_bytes": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_used": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_avail": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "bytes_total": 0 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "fsmap": { 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "btime": "2026-03-09T20:18:31:513185+0000", 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 
"by_rank": [], 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "up:standby": 0 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap": { 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standbys": 0, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modules": [ 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "iostat", 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "nfs", 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "restful" 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ], 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "servicemap": { 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 1, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "modified": "2026-03-09T20:18:31.513941+0000", 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "services": {} 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout }, 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "progress_events": {} 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:37.829 INFO:teuthology.orchestra.run.vm03.stdout:mgr is available 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [global] 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout fsid = f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_cluster_log_file_level = debug 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_host = [v2:192.168.123.103:3300,v1:192.168.123.103:6789] 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_pg_remap = true 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_osd_allow_primary_affinity = true 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mon_warn_on_no_sortbitwise = false 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_crush_chooseleaf_type = 0 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [mgr] 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout mgr/telemetry/nag = false 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 2026-03-09T20:18:38.078 
INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout [osd] 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_map_max_advance = 10 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout osd_sloppy_crc = true 2026-03-09T20:18:38.078 INFO:teuthology.orchestra.run.vm03.stdout:Enabling cephadm module... 2026-03-09T20:18:38.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:38 vm03 bash[20708]: cluster 2026-03-09T20:18:37.076795+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: a(active, since 1.00812s) 2026-03-09T20:18:38.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:38 vm03 bash[20708]: cluster 2026-03-09T20:18:37.076795+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: a(active, since 1.00812s) 2026-03-09T20:18:38.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:38 vm03 bash[20708]: audit 2026-03-09T20:18:37.794105+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.103:0/288382585' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:18:38.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:38 vm03 bash[20708]: audit 2026-03-09T20:18:37.794105+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.103:0/288382585' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:18:38.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:38 vm03 bash[20708]: audit 2026-03-09T20:18:38.040515+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.103:0/1974182671' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T20:18:38.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:38 vm03 bash[20708]: audit 2026-03-09T20:18:38.040515+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.103:0/1974182671' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T20:18:38.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:38 vm03 bash[20708]: audit 2026-03-09T20:18:38.042727+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 192.168.123.103:0/1974182671' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T20:18:38.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:38 vm03 bash[20708]: audit 2026-03-09T20:18:38.042727+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 192.168.123.103:0/1974182671' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20708]: audit 2026-03-09T20:18:38.290171+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20708]: audit 2026-03-09T20:18:38.290171+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20708]: audit 2026-03-09T20:18:39.044376+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 
192.168.123.103:0/936723196' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20708]: audit 2026-03-09T20:18:39.044376+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20708]: cluster 2026-03-09T20:18:39.046414+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20708]: cluster 2026-03-09T20:18:39.046414+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20968]: ignoring --setuser ceph since I am not root 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20968]: ignoring --setgroup ceph since I am not root 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20968]: debug 2026-03-09T20:18:39.191+0000 7f13480d0140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20968]: debug 2026-03-09T20:18:39.227+0000 7f13480d0140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:18:39.346 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20968]: debug 2026-03-09T20:18:39.343+0000 7f13480d0140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:18:39.390 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:39.390 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 4, 2026-03-09T20:18:39.390 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T20:18:39.390 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-09T20:18:39.390 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T20:18:39.390 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:39.390 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart... 2026-03-09T20:18:39.390 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 4... 2026-03-09T20:18:39.906 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:39 vm03 bash[20968]: debug 2026-03-09T20:18:39.635+0000 7f13480d0140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:18:40.311 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20708]: audit 2026-03-09T20:18:39.350654+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 192.168.123.103:0/3285276427' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:18:40.311 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20708]: audit 2026-03-09T20:18:39.350654+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 
192.168.123.103:0/3285276427' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:18:40.311 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.011+0000 7f13480d0140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:18:40.311 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.083+0000 7f13480d0140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T20:18:40.311 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T20:18:40.311 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T20:18:40.311 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: from numpy import show_config as show_numpy_config 2026-03-09T20:18:40.311 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.187+0000 7f13480d0140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:18:40.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.307+0000 7f13480d0140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:18:40.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.339+0000 7f13480d0140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T20:18:40.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.371+0000 7f13480d0140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:18:40.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.407+0000 7f13480d0140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T20:18:40.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.451+0000 7f13480d0140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:18:41.142 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.855+0000 7f13480d0140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T20:18:41.142 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.891+0000 7f13480d0140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T20:18:41.142 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:40 vm03 bash[20968]: debug 2026-03-09T20:18:40.927+0000 7f13480d0140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:18:41.142 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:41 vm03 bash[20968]: debug 2026-03-09T20:18:41.059+0000 7f13480d0140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:18:41.142 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:41 vm03 bash[20968]: debug 2026-03-09T20:18:41.099+0000 7f13480d0140 -1 
mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T20:18:41.406 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:41 vm03 bash[20968]: debug 2026-03-09T20:18:41.139+0000 7f13480d0140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:18:41.406 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:41 vm03 bash[20968]: debug 2026-03-09T20:18:41.239+0000 7f13480d0140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:41.406 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:41 vm03 bash[20968]: debug 2026-03-09T20:18:41.383+0000 7f13480d0140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:18:41.906 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:41 vm03 bash[20968]: debug 2026-03-09T20:18:41.539+0000 7f13480d0140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:18:41.906 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:41 vm03 bash[20968]: debug 2026-03-09T20:18:41.571+0000 7f13480d0140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:18:41.906 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:41 vm03 bash[20968]: debug 2026-03-09T20:18:41.611+0000 7f13480d0140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T20:18:41.906 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:41 vm03 bash[20968]: debug 2026-03-09T20:18:41.743+0000 7f13480d0140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: cluster 2026-03-09T20:18:41.960535+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: cluster 2026-03-09T20:18:41.960535+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: cluster 2026-03-09T20:18:41.960722+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon a 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: cluster 2026-03-09T20:18:41.960722+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon a 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: cluster 2026-03-09T20:18:41.965139+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: cluster 2026-03-09T20:18:41.965139+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: cluster 2026-03-09T20:18:41.965278+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00463008s) 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: cluster 2026-03-09T20:18:41.965278+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00463008s) 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.967601+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 
2026-03-09T20:18:41.967601+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.967748+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.967748+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.968320+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.968320+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.968438+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.968438+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.968587+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.968587+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: cluster 2026-03-09T20:18:41.972913+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon a is now available 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: cluster 2026-03-09T20:18:41.972913+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon a is now available 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.981206+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.981206+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.984297+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 
vm03 bash[20708]: audit 2026-03-09T20:18:41.984297+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.994163+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.994163+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.994435+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.994435+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.995360+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:41.995360+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:42.002675+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:42 vm03 bash[20708]: audit 2026-03-09T20:18:42.002675+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:18:42.407 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:41 vm03 bash[20968]: debug 2026-03-09T20:18:41.955+0000 7f13480d0140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T20:18:43.010 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:43.010 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 6, 2026-03-09T20:18:43.010 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T20:18:43.010 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:43.010 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 4 is available 2026-03-09T20:18:43.010 INFO:teuthology.orchestra.run.vm03.stdout:Setting orchestrator backend to cephadm... 
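At this point the bootstrap has enabled the cephadm mgr module (audit: {"prefix": "mgr module enable", "module": "cephadm"}), waited for the mgr to restart into a newer mgrmap epoch, and is about to point the orchestrator at cephadm ("Setting orchestrator backend to cephadm..."). The same sequence can be reproduced by hand roughly as follows; this is a sketch assembled from the commands visible in the audit log, and the jq field name matches the `ceph mgr stat` JSON printed above.
    # Hedged sketch of the enable / restart / set-backend sequence seen in the log.
    before=$(ceph mgr stat -f json | jq -r '.epoch')
    ceph mgr module enable cephadm            # audit: "mgr module enable", "module": "cephadm"
    # wait until the mgr comes back in a newer mgrmap epoch
    until [ "$(ceph mgr stat -f json | jq -r '.epoch')" -gt "$before" ]; do sleep 2; done
    ceph orch set backend cephadm             # audit: "orch set backend", "module_name": "cephadm"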
2026-03-09T20:18:43.263 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20708]: cephadm 2026-03-09T20:18:41.978536+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T20:18:43.264 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20708]: cephadm 2026-03-09T20:18:41.978536+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T20:18:43.264 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20708]: cluster 2026-03-09T20:18:42.969217+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e6: a(active, since 1.00857s) 2026-03-09T20:18:43.264 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20708]: cluster 2026-03-09T20:18:42.969217+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e6: a(active, since 1.00857s) 2026-03-09T20:18:43.573 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout value unchanged 2026-03-09T20:18:43.573 INFO:teuthology.orchestra.run.vm03.stdout:Generating ssh key... 2026-03-09T20:18:44.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:42.970034+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:18:44.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:42.970034+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:18:44.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:42.974105+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:42.974105+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.245897+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.245897+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.255451+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.255451+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.400912+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.400912+0000 mon.a (mon.0) 58 : audit [INF] 
from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.403394+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.403394+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.859745+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.859745+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.862749+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:44 vm03 bash[20708]: audit 2026-03-09T20:18:43.862749+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: Generating public/private ed25519 key pair. 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: Your identification has been saved in /tmp/tmpcz4v1zw0/key 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: Your public key has been saved in /tmp/tmpcz4v1zw0/key.pub 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: The key fingerprint is: 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: SHA256:YZBGsn+wbAX7ar4uigvRRR9RdY0Q0DkCfLvzM5M+XQs ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: The key's randomart image is: 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: +--[ED25519 256]--+ 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: | ooB=o+++.o | 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: | . ++=o +.. . | 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: | o.+.+o . | 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: | . . o *.. | 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: |. . = S. | 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: | . . oo E . | 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: |. o o o o . | 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: |.. .o B . . | 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: |o... o+. 
..= | 2026-03-09T20:18:44.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:43 vm03 bash[20968]: +----[SHA256]-----+ 2026-03-09T20:18:44.231 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYOtP+CCoJj5PG7okNI28jvvvgLejhXEDFiZz8gIZsN ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:44.231 INFO:teuthology.orchestra.run.vm03.stdout:Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub 2026-03-09T20:18:44.231 INFO:teuthology.orchestra.run.vm03.stdout:Adding key to root@localhost authorized_keys... 2026-03-09T20:18:44.231 INFO:teuthology.orchestra.run.vm03.stdout:Adding host vm03... 2026-03-09T20:18:45.025 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: audit 2026-03-09T20:18:43.241366+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:45.025 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: audit 2026-03-09T20:18:43.241366+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:45.025 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: audit 2026-03-09T20:18:43.533912+0000 mgr.a (mgr.14118) 5 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:45.025 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: audit 2026-03-09T20:18:43.533912+0000 mgr.a (mgr.14118) 5 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:45.026 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: audit 2026-03-09T20:18:43.842921+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:45.026 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: audit 2026-03-09T20:18:43.842921+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:45.026 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: cephadm 2026-03-09T20:18:43.843137+0000 mgr.a (mgr.14118) 7 : cephadm [INF] Generating ssh key... 2026-03-09T20:18:45.026 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: cephadm 2026-03-09T20:18:43.843137+0000 mgr.a (mgr.14118) 7 : cephadm [INF] Generating ssh key... 
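The ssh-key block above is cephadm preparing orchestrator access to the host: it sets the ssh user, generates a cluster key pair, exports the public key ("Wrote public SSH key to /home/ubuntu/cephtest/ceph.pub"), and authorizes it for root before adding the host. Based on the audit entries recorded here (cephadm set-user, cephadm generate-key, cephadm get-pub-key), the equivalent manual steps look roughly like this; the authorized_keys path is the usual root default and is an assumption, not something shown in the log.
    # Hedged sketch of the ssh-key steps recorded in the audit log.
    ceph cephadm set-user root
    ceph cephadm generate-key
    ceph cephadm get-pub-key > /tmp/ceph.pub
    # assumption: default root authorized_keys path on the target node
    sudo tee -a /root/.ssh/authorized_keys < /tmp/ceph.pub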
2026-03-09T20:18:45.026 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: audit 2026-03-09T20:18:44.420368+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:45.026 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: audit 2026-03-09T20:18:44.420368+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:45.026 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: cluster 2026-03-09T20:18:44.867659+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e7: a(active, since 2s) 2026-03-09T20:18:45.026 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:45 vm03 bash[20708]: cluster 2026-03-09T20:18:44.867659+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e7: a(active, since 2s) 2026-03-09T20:18:46.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: audit 2026-03-09T20:18:44.184615+0000 mgr.a (mgr.14118) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:46.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: audit 2026-03-09T20:18:44.184615+0000 mgr.a (mgr.14118) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:46.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.210040+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTING 2026-03-09T20:18:46.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.210040+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTING 2026-03-09T20:18:46.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.311449+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:18:46.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.311449+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:18:46.270 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.419682+0000 mgr.a (mgr.14118) 11 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:18:46.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.419682+0000 mgr.a (mgr.14118) 11 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:18:46.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.419729+0000 mgr.a (mgr.14118) 12 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTED 2026-03-09T20:18:46.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.419729+0000 mgr.a (mgr.14118) 12 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTED 2026-03-09T20:18:46.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.420341+0000 mgr.a (mgr.14118) 13 : 
cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Client ('192.168.123.103', 59018) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:18:46.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.420341+0000 mgr.a (mgr.14118) 13 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Client ('192.168.123.103', 59018) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:18:46.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: audit 2026-03-09T20:18:44.449710+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:46.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: audit 2026-03-09T20:18:44.449710+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:46.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.990494+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-09T20:18:46.271 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:46 vm03 bash[20708]: cephadm 2026-03-09T20:18:44.990494+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-09T20:18:46.283 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Added host 'vm03' with addr '192.168.123.103' 2026-03-09T20:18:46.283 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mon service... 2026-03-09T20:18:46.564 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mon update... 2026-03-09T20:18:46.564 INFO:teuthology.orchestra.run.vm03.stdout:Deploying unmanaged mgr service... 2026-03-09T20:18:46.803 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Scheduled mgr update... 2026-03-09T20:18:47.292 INFO:teuthology.orchestra.run.vm03.stdout:Enabling the dashboard module... 
2026-03-09T20:18:47.336 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.234344+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:47.336 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.234344+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:47.336 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: cephadm 2026-03-09T20:18:46.234694+0000 mgr.a (mgr.14118) 16 : cephadm [INF] Added host vm03 2026-03-09T20:18:47.336 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: cephadm 2026-03-09T20:18:46.234694+0000 mgr.a (mgr.14118) 16 : cephadm [INF] Added host vm03 2026-03-09T20:18:47.336 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.235661+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.235661+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.521993+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.521993+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: cephadm 2026-03-09T20:18:46.522859+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: cephadm 2026-03-09T20:18:46.522859+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.527027+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.527027+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.768856+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.768856+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", 
"unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: cephadm 2026-03-09T20:18:46.769511+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: cephadm 2026-03-09T20:18:46.769511+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.771854+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:46.771854+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:47.009342+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.103:0/776762689' entity='client.admin' 2026-03-09T20:18:47.337 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:47 vm03 bash[20708]: audit 2026-03-09T20:18:47.009342+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.103:0/776762689' entity='client.admin' 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20968]: ignoring --setuser ceph since I am not root 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20968]: ignoring --setgroup ceph since I am not root 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20968]: debug 2026-03-09T20:18:48.411+0000 7f4bfb5f5140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20968]: debug 2026-03-09T20:18:48.447+0000 7f4bfb5f5140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20708]: audit 2026-03-09T20:18:47.250597+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.103:0/1643594369' entity='client.admin' 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20708]: audit 2026-03-09T20:18:47.250597+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.103:0/1643594369' entity='client.admin' 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20708]: audit 2026-03-09T20:18:47.532930+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.103:0/2743274158' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20708]: audit 2026-03-09T20:18:47.532930+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 
192.168.123.103:0/2743274158' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20708]: audit 2026-03-09T20:18:47.657547+0000 mon.a (mon.0) 71 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20708]: audit 2026-03-09T20:18:47.657547+0000 mon.a (mon.0) 71 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20708]: audit 2026-03-09T20:18:47.918753+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:48.560 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20708]: audit 2026-03-09T20:18:47.918753+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:18:48.660 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:48.660 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "epoch": 8, 2026-03-09T20:18:48.660 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "available": true, 2026-03-09T20:18:48.660 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "active_name": "a", 2026-03-09T20:18:48.660 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "num_standby": 0 2026-03-09T20:18:48.660 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:48.660 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for the mgr to restart... 2026-03-09T20:18:48.660 INFO:teuthology.orchestra.run.vm03.stdout:Waiting for mgr epoch 8... 2026-03-09T20:18:48.846 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20968]: debug 2026-03-09T20:18:48.555+0000 7f4bfb5f5140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:18:49.156 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:48 vm03 bash[20968]: debug 2026-03-09T20:18:48.843+0000 7f4bfb5f5140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20708]: audit 2026-03-09T20:18:48.255996+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 192.168.123.103:0/2743274158' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20708]: audit 2026-03-09T20:18:48.255996+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 192.168.123.103:0/2743274158' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20708]: cluster 2026-03-09T20:18:48.259467+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e8: a(active, since 6s) 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20708]: cluster 2026-03-09T20:18:48.259467+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e8: a(active, since 6s) 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20708]: audit 2026-03-09T20:18:48.562228+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 
192.168.123.103:0/446170918' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20708]: audit 2026-03-09T20:18:48.562228+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.103:0/446170918' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: debug 2026-03-09T20:18:49.239+0000 7f4bfb5f5140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: debug 2026-03-09T20:18:49.315+0000 7f4bfb5f5140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: from numpy import show_config as show_numpy_config 2026-03-09T20:18:49.546 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: debug 2026-03-09T20:18:49.419+0000 7f4bfb5f5140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:18:49.906 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: debug 2026-03-09T20:18:49.543+0000 7f4bfb5f5140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:18:49.906 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: debug 2026-03-09T20:18:49.575+0000 7f4bfb5f5140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T20:18:49.906 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: debug 2026-03-09T20:18:49.607+0000 7f4bfb5f5140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:18:49.906 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: debug 2026-03-09T20:18:49.643+0000 7f4bfb5f5140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T20:18:49.906 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:49 vm03 bash[20968]: debug 2026-03-09T20:18:49.691+0000 7f4bfb5f5140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:18:50.379 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.095+0000 7f4bfb5f5140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T20:18:50.379 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.131+0000 7f4bfb5f5140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T20:18:50.379 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.167+0000 7f4bfb5f5140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:18:50.379 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 
bash[20968]: debug 2026-03-09T20:18:50.299+0000 7f4bfb5f5140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:18:50.379 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.339+0000 7f4bfb5f5140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T20:18:50.664 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.375+0000 7f4bfb5f5140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:18:50.664 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.479+0000 7f4bfb5f5140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:50.664 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.619+0000 7f4bfb5f5140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:18:51.156 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.779+0000 7f4bfb5f5140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:18:51.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.811+0000 7f4bfb5f5140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:18:51.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.847+0000 7f4bfb5f5140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T20:18:51.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:50 vm03 bash[20968]: debug 2026-03-09T20:18:50.983+0000 7f4bfb5f5140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:18:51.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20968]: debug 2026-03-09T20:18:51.187+0000 7f4bfb5f5140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T20:18:51.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: cluster 2026-03-09T20:18:51.194813+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:18:51.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: cluster 2026-03-09T20:18:51.194813+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:18:51.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: cluster 2026-03-09T20:18:51.195258+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon a 2026-03-09T20:18:51.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: cluster 2026-03-09T20:18:51.195258+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon a 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: cluster 2026-03-09T20:18:51.200372+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: cluster 2026-03-09T20:18:51.200372+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: cluster 2026-03-09T20:18:51.200503+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00534097s) 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: cluster 2026-03-09T20:18:51.200503+0000 mon.a (mon.0) 79 : 
cluster [DBG] mgrmap e9: a(active, starting, since 0.00534097s) 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.202704+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.202704+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.203030+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.203030+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.203826+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.203826+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.204155+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.204155+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.204470+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.204470+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: cluster 2026-03-09T20:18:51.209854+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon a is now available 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: cluster 2026-03-09T20:18:51.209854+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon a is now available 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.226412+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:51.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.226412+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.241984+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.241984+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.244288+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:18:51.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:51 vm03 bash[20708]: audit 2026-03-09T20:18:51.244288+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:18:52.241 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout { 2026-03-09T20:18:52.241 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "mgrmap_epoch": 10, 2026-03-09T20:18:52.241 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout "initialized": true 2026-03-09T20:18:52.241 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout } 2026-03-09T20:18:52.241 INFO:teuthology.orchestra.run.vm03.stdout:mgr epoch 8 is available 2026-03-09T20:18:52.242 INFO:teuthology.orchestra.run.vm03.stdout:Generating a dashboard self-signed certificate... 2026-03-09T20:18:52.506 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout Self-signed certificate created 2026-03-09T20:18:52.506 INFO:teuthology.orchestra.run.vm03.stdout:Creating initial admin user... 2026-03-09T20:18:52.915 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout {"username": "admin", "password": "$2b$12$Qf6ZiClVZjtgyduzPyp0YuW3SNEaGSwJNh1t2duWJSNXv/s767qsS", "roles": ["administrator"], "name": null, "email": null, "lastUpdate": 1773087532, "enabled": true, "pwdExpirationDate": null, "pwdUpdateRequired": true} 2026-03-09T20:18:52.915 INFO:teuthology.orchestra.run.vm03.stdout:Fetching dashboard port number... 2026-03-09T20:18:53.161 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stdout 8443 2026-03-09T20:18:53.161 INFO:teuthology.orchestra.run.vm03.stdout:firewalld does not appear to be present 2026-03-09T20:18:53.161 INFO:teuthology.orchestra.run.vm03.stdout:Not possible to open ports <[8443]>. 
firewalld.service is not available 2026-03-09T20:18:53.162 INFO:teuthology.orchestra.run.vm03.stdout:Ceph Dashboard is now available at: 2026-03-09T20:18:53.162 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:53.162 INFO:teuthology.orchestra.run.vm03.stdout: URL: https://vm03.local:8443/ 2026-03-09T20:18:53.162 INFO:teuthology.orchestra.run.vm03.stdout: User: admin 2026-03-09T20:18:53.162 INFO:teuthology.orchestra.run.vm03.stdout: Password: 2b1ofcjyf6 2026-03-09T20:18:53.162 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:53.162 INFO:teuthology.orchestra.run.vm03.stdout:Saving cluster configuration to /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config directory 2026-03-09T20:18:53.406 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cephadm 2026-03-09T20:18:52.031305+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTING 2026-03-09T20:18:53.406 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cephadm 2026-03-09T20:18:52.031305+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTING 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cephadm 2026-03-09T20:18:52.139156+0000 mgr.a (mgr.14150) 2 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cephadm 2026-03-09T20:18:52.139156+0000 mgr.a (mgr.14150) 2 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cephadm 2026-03-09T20:18:52.139627+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Client ('192.168.123.103', 33438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cephadm 2026-03-09T20:18:52.139627+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Client ('192.168.123.103', 33438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cluster 2026-03-09T20:18:52.203151+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e10: a(active, since 1.00799s) 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cluster 2026-03-09T20:18:52.203151+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e10: a(active, since 1.00799s) 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:52.203763+0000 mgr.a (mgr.14150) 4 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:52.203763+0000 mgr.a (mgr.14150) 4 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:52.207504+0000 mgr.a (mgr.14150) 5 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: 
dispatch 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:52.207504+0000 mgr.a (mgr.14150) 5 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cephadm 2026-03-09T20:18:52.240770+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cephadm 2026-03-09T20:18:52.240770+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cephadm 2026-03-09T20:18:52.240821+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTED 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: cephadm 2026-03-09T20:18:52.240821+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTED 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:52.470914+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:52.470914+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:52.472985+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:52.472985+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:52.881473+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:52.881473+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:53.120978+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 192.168.123.103:0/1001944308' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T20:18:53.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:53 vm03 bash[20708]: audit 2026-03-09T20:18:53.120978+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 
192.168.123.103:0/1001944308' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout:/usr/bin/ceph: stderr set mgr/dashboard/cluster/status 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout:You can access the Ceph CLI as following in case of multi-cluster or non-default config: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: sudo /home/ubuntu/cephtest/cephadm shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout:Or, if you are only running a single cluster on this host: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: sudo /home/ubuntu/cephtest/cephadm shell 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout:Please consider enabling telemetry to help improve Ceph: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: ceph telemetry on 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout:For more information see: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: https://docs.ceph.com/en/latest/mgr/telemetry/ 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:18:53.469 INFO:teuthology.orchestra.run.vm03.stdout:Bootstrap complete. 2026-03-09T20:18:53.489 INFO:tasks.cephadm:Fetching config... 2026-03-09T20:18:53.490 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:18:53.490 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.conf of=/dev/stdout 2026-03-09T20:18:53.492 INFO:tasks.cephadm:Fetching client.admin keyring... 2026-03-09T20:18:53.492 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:18:53.492 DEBUG:teuthology.orchestra.run.vm03:> dd if=/etc/ceph/ceph.client.admin.keyring of=/dev/stdout 2026-03-09T20:18:53.536 INFO:tasks.cephadm:Fetching mon keyring... 2026-03-09T20:18:53.536 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:18:53.536 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/keyring of=/dev/stdout 2026-03-09T20:18:53.584 INFO:tasks.cephadm:Fetching pub ssh key... 2026-03-09T20:18:53.584 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:18:53.584 DEBUG:teuthology.orchestra.run.vm03:> dd if=/home/ubuntu/cephtest/ceph.pub of=/dev/stdout 2026-03-09T20:18:53.627 INFO:tasks.cephadm:Installing pub ssh key for root users... 
2026-03-09T20:18:53.627 DEBUG:teuthology.orchestra.run.vm03:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYOtP+CCoJj5PG7okNI28jvvvgLejhXEDFiZz8gIZsN ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T20:18:53.683 INFO:teuthology.orchestra.run.vm03.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYOtP+CCoJj5PG7okNI28jvvvgLejhXEDFiZz8gIZsN ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:53.688 DEBUG:teuthology.orchestra.run.vm04:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYOtP+CCoJj5PG7okNI28jvvvgLejhXEDFiZz8gIZsN ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T20:18:53.701 INFO:teuthology.orchestra.run.vm04.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYOtP+CCoJj5PG7okNI28jvvvgLejhXEDFiZz8gIZsN ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:53.706 DEBUG:teuthology.orchestra.run.vm08:> sudo install -d -m 0700 /root/.ssh && echo 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYOtP+CCoJj5PG7okNI28jvvvgLejhXEDFiZz8gIZsN ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d' | sudo tee -a /root/.ssh/authorized_keys && sudo chmod 0600 /root/.ssh/authorized_keys 2026-03-09T20:18:53.718 INFO:teuthology.orchestra.run.vm08.stdout:ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKYOtP+CCoJj5PG7okNI28jvvvgLejhXEDFiZz8gIZsN ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:18:53.724 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph config set mgr mgr/cephadm/allow_ptrace true 2026-03-09T20:18:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:54 vm03 bash[20708]: audit 2026-03-09T20:18:52.444429+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:54 vm03 bash[20708]: audit 2026-03-09T20:18:52.444429+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:54 vm03 bash[20708]: audit 2026-03-09T20:18:52.722648+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:54 vm03 bash[20708]: audit 2026-03-09T20:18:52.722648+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:18:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:54 vm03 bash[20708]: audit 2026-03-09T20:18:53.431648+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 
192.168.123.103:0/242845642' entity='client.admin' 2026-03-09T20:18:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:54 vm03 bash[20708]: audit 2026-03-09T20:18:53.431648+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 192.168.123.103:0/242845642' entity='client.admin' 2026-03-09T20:18:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:54 vm03 bash[20708]: cluster 2026-03-09T20:18:53.885903+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-09T20:18:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:54 vm03 bash[20708]: cluster 2026-03-09T20:18:53.885903+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-09T20:18:57.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:56 vm03 bash[20708]: audit 2026-03-09T20:18:55.809539+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:18:57.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:56 vm03 bash[20708]: audit 2026-03-09T20:18:55.809539+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:18:57.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:56 vm03 bash[20708]: audit 2026-03-09T20:18:56.389070+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:18:57.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:56 vm03 bash[20708]: audit 2026-03-09T20:18:56.389070+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:18:57.680 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:18:57.978 INFO:tasks.cephadm:Distributing conf and client.admin keyring to all hosts + 0755 2026-03-09T20:18:57.979 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph orch client-keyring set client.admin '*' --mode 0755 2026-03-09T20:18:59.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:58 vm03 bash[20708]: cluster 2026-03-09T20:18:57.816412+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-09T20:18:59.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:58 vm03 bash[20708]: cluster 2026-03-09T20:18:57.816412+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-09T20:18:59.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:58 vm03 bash[20708]: audit 2026-03-09T20:18:57.921012+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 192.168.123.103:0/1330021691' entity='client.admin' 2026-03-09T20:18:59.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:18:58 vm03 bash[20708]: audit 2026-03-09T20:18:57.921012+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 
192.168.123.103:0/1330021691' entity='client.admin' 2026-03-09T20:19:02.598 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:19:02.941 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm04 2026-03-09T20:19:02.941 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T20:19:02.941 DEBUG:teuthology.orchestra.run.vm04:> dd of=/etc/ceph/ceph.conf 2026-03-09T20:19:02.944 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T20:19:02.944 DEBUG:teuthology.orchestra.run.vm04:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:02.989 INFO:tasks.cephadm:Adding host vm04 to orchestrator... 2026-03-09T20:19:02.989 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph orch host add vm04 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.122120+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.122120+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.124456+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.124456+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.125087+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.125087+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.127397+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.127397+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.132403+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.132403+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": 
"config dump", "format": "json"}]: dispatch 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.134844+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.134844+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.847363+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.847363+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.847997+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.847997+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.848900+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.848900+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.849282+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.849282+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.993408+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.993408+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.997180+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.997180+0000 mon.a (mon.0) 111 : audit [INF] 
from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.999948+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:03.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:03 vm03 bash[20708]: audit 2026-03-09T20:19:02.999948+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:04.406 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:04 vm03 bash[20708]: audit 2026-03-09T20:19:02.844517+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:04.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:04 vm03 bash[20708]: audit 2026-03-09T20:19:02.844517+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:04.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:04 vm03 bash[20708]: cephadm 2026-03-09T20:19:02.849898+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:04.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:04 vm03 bash[20708]: cephadm 2026-03-09T20:19:02.849898+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:04.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:04 vm03 bash[20708]: cephadm 2026-03-09T20:19:02.885329+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:04.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:04 vm03 bash[20708]: cephadm 2026-03-09T20:19:02.885329+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:04.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:04 vm03 bash[20708]: cephadm 2026-03-09T20:19:02.922640+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:04.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:04 vm03 bash[20708]: cephadm 2026-03-09T20:19:02.922640+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:04.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:04 vm03 bash[20708]: cephadm 2026-03-09T20:19:02.959749+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:04.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:04 vm03 bash[20708]: cephadm 2026-03-09T20:19:02.959749+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:07.604 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:19:08.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:08 vm03 bash[20708]: audit 2026-03-09T20:19:07.862529+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": 
"vm04", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:08.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:08 vm03 bash[20708]: audit 2026-03-09T20:19:07.862529+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:09.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:09 vm03 bash[20708]: cephadm 2026-03-09T20:19:08.408237+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04 2026-03-09T20:19:09.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:09 vm03 bash[20708]: cephadm 2026-03-09T20:19:08.408237+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04 2026-03-09T20:19:09.689 INFO:teuthology.orchestra.run.vm03.stdout:Added host 'vm04' with addr '192.168.123.104' 2026-03-09T20:19:09.764 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph orch host ls --format=json 2026-03-09T20:19:11.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:10 vm03 bash[20708]: audit 2026-03-09T20:19:09.687914+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:11.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:10 vm03 bash[20708]: audit 2026-03-09T20:19:09.687914+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:11.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:10 vm03 bash[20708]: cephadm 2026-03-09T20:19:09.688455+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm04 2026-03-09T20:19:11.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:10 vm03 bash[20708]: cephadm 2026-03-09T20:19:09.688455+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm04 2026-03-09T20:19:11.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:10 vm03 bash[20708]: audit 2026-03-09T20:19:09.688783+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:11.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:10 vm03 bash[20708]: audit 2026-03-09T20:19:09.688783+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:11.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:10 vm03 bash[20708]: audit 2026-03-09T20:19:09.973275+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:11.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:10 vm03 bash[20708]: audit 2026-03-09T20:19:09.973275+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:12.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:12 vm03 bash[20708]: cluster 2026-03-09T20:19:11.205275+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:12.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:12 vm03 bash[20708]: cluster 2026-03-09T20:19:11.205275+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:12.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:12 vm03 bash[20708]: audit 2026-03-09T20:19:11.238670+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:12.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:12 vm03 bash[20708]: audit 2026-03-09T20:19:11.238670+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:12.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:12 vm03 bash[20708]: audit 2026-03-09T20:19:11.765459+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:12.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:12 vm03 bash[20708]: audit 2026-03-09T20:19:11.765459+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:14.380 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:19:14.645 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:19:14.645 INFO:teuthology.orchestra.run.vm03.stdout:[{"addr": "192.168.123.103", "hostname": "vm03", "labels": [], "status": ""}, {"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}] 2026-03-09T20:19:14.655 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:14 vm03 bash[20708]: cluster 2026-03-09T20:19:13.205533+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:14.655 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:14 vm03 bash[20708]: cluster 2026-03-09T20:19:13.205533+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:14.694 INFO:tasks.cephadm:Writing (initial) conf and keyring to vm08 2026-03-09T20:19:14.694 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T20:19:14.694 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.conf 2026-03-09T20:19:14.697 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T20:19:14.697 DEBUG:teuthology.orchestra.run.vm08:> dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:14.743 INFO:tasks.cephadm:Adding host vm08 to orchestrator... 
2026-03-09T20:19:14.743 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph orch host add vm08 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.431704+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.431704+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.434442+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.434442+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.440494+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.440494+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.442394+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.442394+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.442878+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.442878+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.443440+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.443440+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.443824+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.443824+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: cephadm 2026-03-09T20:19:14.444380+0000 mgr.a (mgr.14150) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: cephadm 2026-03-09T20:19:14.444380+0000 mgr.a (mgr.14150) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: cephadm 2026-03-09T20:19:14.478358+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: cephadm 2026-03-09T20:19:14.478358+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: cephadm 2026-03-09T20:19:14.511747+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: cephadm 2026-03-09T20:19:14.511747+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: cephadm 2026-03-09T20:19:14.540979+0000 mgr.a (mgr.14150) 23 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: cephadm 2026-03-09T20:19:14.540979+0000 mgr.a (mgr.14150) 23 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.570469+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.570469+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.572527+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.572527+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.574471+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 
bash[20708]: audit 2026-03-09T20:19:14.574471+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.645168+0000 mgr.a (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: audit 2026-03-09T20:19:14.645168+0000 mgr.a (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: cluster 2026-03-09T20:19:15.205759+0000 mgr.a (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:15.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:15 vm03 bash[20708]: cluster 2026-03-09T20:19:15.205759+0000 mgr.a (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:18.388 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:19:18.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:18 vm03 bash[20708]: cluster 2026-03-09T20:19:17.205974+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:18.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:18 vm03 bash[20708]: cluster 2026-03-09T20:19:17.205974+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:20.404 INFO:teuthology.orchestra.run.vm03.stdout:Added host 'vm08' with addr '192.168.123.108' 2026-03-09T20:19:20.465 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph orch host ls --format=json 2026-03-09T20:19:20.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:20 vm03 bash[20708]: audit 2026-03-09T20:19:18.640579+0000 mgr.a (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:20.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:20 vm03 bash[20708]: audit 2026-03-09T20:19:18.640579+0000 mgr.a (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:20.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:20 vm03 bash[20708]: cephadm 2026-03-09T20:19:19.163019+0000 mgr.a (mgr.14150) 28 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-09T20:19:20.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:20 vm03 bash[20708]: cephadm 2026-03-09T20:19:19.163019+0000 mgr.a (mgr.14150) 28 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-09T20:19:20.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:20 vm03 bash[20708]: cluster 2026-03-09T20:19:19.206180+0000 mgr.a (mgr.14150) 29 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:20.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:20 vm03 bash[20708]: cluster 2026-03-09T20:19:19.206180+0000 mgr.a (mgr.14150) 29 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:21.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:21 vm03 bash[20708]: audit 2026-03-09T20:19:20.404118+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:21 vm03 bash[20708]: audit 2026-03-09T20:19:20.404118+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:21 vm03 bash[20708]: cephadm 2026-03-09T20:19:20.404378+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Added host vm08 2026-03-09T20:19:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:21 vm03 bash[20708]: cephadm 2026-03-09T20:19:20.404378+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Added host vm08 2026-03-09T20:19:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:21 vm03 bash[20708]: audit 2026-03-09T20:19:20.404599+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:21 vm03 bash[20708]: audit 2026-03-09T20:19:20.404599+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:21 vm03 bash[20708]: audit 2026-03-09T20:19:20.710420+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:21 vm03 bash[20708]: audit 2026-03-09T20:19:20.710420+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:21 vm03 bash[20708]: cluster 2026-03-09T20:19:21.206339+0000 mgr.a (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:21 vm03 bash[20708]: cluster 2026-03-09T20:19:21.206339+0000 mgr.a (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:23.406 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:22 vm03 bash[20708]: audit 2026-03-09T20:19:21.976042+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:23.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:22 vm03 bash[20708]: audit 2026-03-09T20:19:21.976042+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:23.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:22 vm03 bash[20708]: audit 2026-03-09T20:19:22.540072+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:23.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:22 vm03 bash[20708]: audit 2026-03-09T20:19:22.540072+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:24.406 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:23 vm03 bash[20708]: cluster 
2026-03-09T20:19:23.206501+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:24.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:23 vm03 bash[20708]: cluster 2026-03-09T20:19:23.206501+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:25.076 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:19:25.354 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:19:25.354 INFO:teuthology.orchestra.run.vm03.stdout:[{"addr": "192.168.123.103", "hostname": "vm03", "labels": [], "status": ""}, {"addr": "192.168.123.104", "hostname": "vm04", "labels": [], "status": ""}, {"addr": "192.168.123.108", "hostname": "vm08", "labels": [], "status": ""}] 2026-03-09T20:19:25.412 INFO:tasks.cephadm:Setting crush tunables to default 2026-03-09T20:19:25.412 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph osd crush tunables default 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: cluster 2026-03-09T20:19:25.206689+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: cluster 2026-03-09T20:19:25.206689+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.314711+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.314711+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.316608+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.316608+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.318926+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.318926+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.320485+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.320485+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
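All three hosts now appear in the orch host ls JSON above. A hedged one-liner (reusing the CEPHADM wrapper from the earlier sketch; jq on the node is an assumption) to turn that output into a hostname-to-address map:
$CEPHADM ceph orch host ls --format=json | jq 'map({(.hostname): .addr}) | add'
# expected for this run: {"vm03": "192.168.123.103", "vm04": "192.168.123.104", "vm08": "192.168.123.108"}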
2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.320864+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.320864+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.321399+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.321399+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.321741+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.321741+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: cephadm 2026-03-09T20:19:25.322295+0000 mgr.a (mgr.14150) 34 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: cephadm 2026-03-09T20:19:25.322295+0000 mgr.a (mgr.14150) 34 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: cephadm 2026-03-09T20:19:25.352154+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: cephadm 2026-03-09T20:19:25.352154+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.354551+0000 mgr.a (mgr.14150) 36 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.354551+0000 mgr.a (mgr.14150) 36 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.459565+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' 
entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.459565+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.462187+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.462187+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.464379+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:26.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:26 vm03 bash[20708]: audit 2026-03-09T20:19:25.464379+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:27.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:27 vm03 bash[20708]: cephadm 2026-03-09T20:19:25.387821+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:27 vm03 bash[20708]: cephadm 2026-03-09T20:19:25.387821+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:27 vm03 bash[20708]: cephadm 2026-03-09T20:19:25.422186+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:27 vm03 bash[20708]: cephadm 2026-03-09T20:19:25.422186+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:28.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:28 vm03 bash[20708]: cluster 2026-03-09T20:19:27.206852+0000 mgr.a (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:28 vm03 bash[20708]: cluster 2026-03-09T20:19:27.206852+0000 mgr.a (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:29.084 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:19:30.323 INFO:teuthology.orchestra.run.vm03.stderr:adjusted tunables profile to default 2026-03-09T20:19:30.383 INFO:tasks.cephadm:Adding mon.a on vm03 2026-03-09T20:19:30.384 INFO:tasks.cephadm:Adding mon.b on vm04 2026-03-09T20:19:30.384 INFO:tasks.cephadm:Adding mon.c on vm08 2026-03-09T20:19:30.384 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph orch apply mon '3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c' 2026-03-09T20:19:30.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:30 vm03 bash[20708]: cluster 
2026-03-09T20:19:29.207096+0000 mgr.a (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:30 vm03 bash[20708]: cluster 2026-03-09T20:19:29.207096+0000 mgr.a (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:30 vm03 bash[20708]: audit 2026-03-09T20:19:29.372151+0000 mon.a (mon.0) 143 : audit [INF] from='client.? 192.168.123.103:0/1213967800' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T20:19:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:30 vm03 bash[20708]: audit 2026-03-09T20:19:29.372151+0000 mon.a (mon.0) 143 : audit [INF] from='client.? 192.168.123.103:0/1213967800' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T20:19:31.495 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:31.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:31 vm03 bash[20708]: audit 2026-03-09T20:19:30.322043+0000 mon.a (mon.0) 144 : audit [INF] from='client.? 192.168.123.103:0/1213967800' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T20:19:31.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:31 vm03 bash[20708]: audit 2026-03-09T20:19:30.322043+0000 mon.a (mon.0) 144 : audit [INF] from='client.? 192.168.123.103:0/1213967800' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T20:19:31.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:31 vm03 bash[20708]: cluster 2026-03-09T20:19:30.323901+0000 mon.a (mon.0) 145 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:31.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:31 vm03 bash[20708]: cluster 2026-03-09T20:19:30.323901+0000 mon.a (mon.0) 145 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:31.920 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mon update... 2026-03-09T20:19:32.021 DEBUG:teuthology.orchestra.run.vm04:mon.b> sudo journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.b.service 2026-03-09T20:19:32.022 DEBUG:teuthology.orchestra.run.vm08:mon.c> sudo journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.c.service 2026-03-09T20:19:32.022 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 
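The harness now blocks until the monmap contains all three monitors. A rough equivalent of that wait, reusing the CEPHADM wrapper from the first sketch (ceph mon dump prints its status line on stderr, so it is discarded here):
until [ "$($CEPHADM ceph mon dump -f json 2>/dev/null | jq '.mons | length')" -eq 3 ]; do
  sleep 10   # poll until mon.a, mon.b and mon.c have all joined the monmap
done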
2026-03-09T20:19:32.022 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph mon dump -f json 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: cluster 2026-03-09T20:19:31.207264+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: cluster 2026-03-09T20:19:31.207264+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.910091+0000 mgr.a (mgr.14150) 42 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.910091+0000 mgr.a (mgr.14150) 42 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: cephadm 2026-03-09T20:19:31.911368+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Saving service mon spec with placement vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c;count:3 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: cephadm 2026-03-09T20:19:31.911368+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Saving service mon spec with placement vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c;count:3 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.919771+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.919771+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.920479+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.920479+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.921530+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 
2026-03-09T20:19:31.921530+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.921998+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.921998+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.929885+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.929885+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.931043+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.931043+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.931471+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: audit 2026-03-09T20:19:31.931471+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: cephadm 2026-03-09T20:19:31.931986+0000 mgr.a (mgr.14150) 44 : cephadm [INF] Deploying daemon mon.c on vm08 2026-03-09T20:19:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:32 vm03 bash[20708]: cephadm 2026-03-09T20:19:31.931986+0000 mgr.a (mgr.14150) 44 : cephadm [INF] Deploying daemon mon.c on vm08 2026-03-09T20:19:33.175 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.c/config 2026-03-09T20:19:33.520 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 systemd[1]: Started Ceph mon.c for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d. 
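mon.c's container is now running under systemd on vm08. To watch it the same way the harness does above (unit name taken verbatim from this run):
sudo journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.c.service   # follow new output only
sudo systemctl is-active ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.c.service     # or just confirm the unit is active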
2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 0 pidfile_write: ignore empty --pid-file 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 0 load: jerasure load: lrc 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Git sha 0 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: DB SUMMARY 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: DB Session ID: 5T8NKODTXXP40CYQEYHZ 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-c/store.db dir, Total Num: 0, files: 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-c/store.db: 000004.log size: 511 ; 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 
bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.env: 0x55e052bbddc0 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.info_log: 0x55e059257880 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.statistics: (nil) 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.use_fsync: 0 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T20:19:33.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T20:19:33.810 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.db_log_dir: 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.wal_dir: 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.write_buffer_manager: 0x55e05925b900 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 
rocksdb: Options.enable_pipelined_write: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.unordered_write: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.row_cache: None 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.wal_filter: None 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T20:19:33.810 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.wal_compression: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: 
debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_open_files: -1 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: 
Options.bytes_per_sync: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Compression algorithms supported: 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: kZSTD supported: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 
2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.merge_operator: 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_filter: None 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e059256480) 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cache_index_and_filter_blocks: 1 2026-03-09T20:19:33.811 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: pin_top_level_index_and_filter: 1 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: index_type: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: data_block_index_type: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: index_shortening: 1 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: checksum: 4 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: no_block_cache: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: block_cache: 0x55e05927d350 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: block_cache_name: BinnedLRUCache 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: block_cache_options: 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: capacity : 536870912 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: num_shard_bits : 4 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: strict_capacity_limit : 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: high_pri_pool_ratio: 0.000 2026-03-09T20:19:33.812 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: block_cache_compressed: (nil) 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: persistent_cache: (nil) 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: block_size: 4096 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: block_size_deviation: 10 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: block_restart_interval: 16 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: index_block_restart_interval: 1 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: metadata_block_size: 4096 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: partition_filters: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: use_delta_encoding: 1 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: filter_policy: bloomfilter 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: whole_key_filtering: 1 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: verify_compression: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: read_amp_bytes_per_bit: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: format_version: 5 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: enable_index_compression: 1 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: block_align: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: max_auto_readahead_size: 262144 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: prepopulate_block_cache: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: initial_auto_readahead_size: 8192 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: num_file_reads_for_auto_readahead: 2 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compression: NoCompression 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: 
Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.num_levels: 7 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T20:19:33.812 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T20:19:33.812 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 
7f8c2334ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 
rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.force_consistency_checks: 1 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.ttl: 2592000 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T20:19:33.813 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.608+0000 7f8c2334ed80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T20:19:33.813 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-c/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ba3b3f32-7859-4cf4-bf86-33766580cbc8 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773087573619669, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.612+0000 7f8c2334ed80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-09T20:19:33.814 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.620+0000 7f8c2334ed80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773087573624613, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773087573, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ba3b3f32-7859-4cf4-bf86-33766580cbc8", "db_session_id": "5T8NKODTXXP40CYQEYHZ", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.620+0000 7f8c2334ed80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773087573624692, "job": 1, "event": "recovery_finished"} 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.620+0000 7f8c2334ed80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.620+0000 7f8c2334ed80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-c/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.620+0000 7f8c2334ed80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e05927ee00 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.620+0000 7f8c2334ed80 4 rocksdb: DB pointer 0x55e05938a000 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.624+0000 7f8c2334ed80 0 mon.c does not exist in monmap, will attempt to join an existing cluster 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.624+0000 7f8c2334ed80 0 using public_addr v2:192.168.123.108:0/0 -> [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.624+0000 7f8c2334ed80 0 starting mon.c rank -1 at public addrs [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] at bind addrs 
[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon_data /var/lib/ceph/mon/ceph-c fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.624+0000 7f8c2334ed80 1 mon.c@-1(???) e0 preinit fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.624+0000 7f8c19118640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.624+0000 7f8c19118640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: ** DB Stats ** 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: ** Compaction Stats [default] ** 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: ** Compaction Stats [default] ** 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Priority Files Size Score 
Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.3 0.00 0.00 1 0.005 0 0 0.0 0.0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Block cache BinnedLRUCache@0x55e05927d350#7 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1.1e-05 secs_since: 0 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.644+0000 7f8c1c11e640 0 mon.c@-1(synchronizing).mds e1 new map 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.644+0000 7f8c1c11e640 0 mon.c@-1(synchronizing).mds e1 print_map 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 
vm08 bash[23232]: e1 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: btime 2026-03-09T20:18:31:513185+0000 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: legacy client fscid: -1 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: No filesystems configured 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.644+0000 7f8c1c11e640 1 mon.c@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-09T20:19:33.814 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.644+0000 7f8c1c11e640 1 mon.c@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.644+0000 7f8c1c11e640 1 mon.c@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.648+0000 7f8c1c11e640 1 mon.c@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.648+0000 7f8c1c11e640 1 mon.c@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.648+0000 7f8c1c11e640 1 mon.c@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.648+0000 7f8c1c11e640 0 mon.c@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.648+0000 7f8c1c11e640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.648+0000 7f8c1c11e640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: debug 2026-03-09T20:19:33.648+0000 7f8c1c11e640 0 mon.c@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:31.513696+0000 mon.a (mon.0) 0 : cluster [INF] mkfs 
f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:31.513696+0000 mon.a (mon.0) 0 : cluster [INF] mkfs f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:31.504964+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:31.504964+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488494+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488494+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488527+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488527+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488532+0000 mon.a (mon.0) 3 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488532+0000 mon.a (mon.0) 3 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488536+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488536+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488543+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488543+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488547+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488547+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488550+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488550+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:33.815 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488554+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488554+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488752+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488752+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488767+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.488767+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.489232+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:32.489232+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:32.712498+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/2780831427' entity='client.admin' 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:32.712498+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/2780831427' entity='client.admin' 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:33.269522+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.103:0/1660484640' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:33.269522+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.103:0/1660484640' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:35.511427+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.103:0/2523825932' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:35.511427+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.103:0/2523825932' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:36.068742+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:36.068742+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:36.072300+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.0036244s) 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:36.072300+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.0036244s) 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073529+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073529+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073584+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073584+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073834+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073834+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073893+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073893+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073945+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073945+0000 mon.a (mon.0) 21 : audit [DBG] 
from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073999+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.073999+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.074589+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.074589+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.075114+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.075114+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:36.078597+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:36.078597+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:33.815 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.086449+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.086449+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.089242+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.089242+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.089450+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 
cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.089450+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.091339+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.091339+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.093120+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:36.093120+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:37.076795+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: a(active, since 1.00812s) 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:37.076795+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: a(active, since 1.00812s) 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:37.794105+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.103:0/288382585' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:37.794105+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.103:0/288382585' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:38.040515+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.103:0/1974182671' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:38.040515+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.103:0/1974182671' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:38.042727+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 192.168.123.103:0/1974182671' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:38.042727+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 
192.168.123.103:0/1974182671' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:38.290171+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:38.290171+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:39.044376+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:39.044376+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:39.046414+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:39.046414+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:39.350654+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 192.168.123.103:0/3285276427' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:39.350654+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 
192.168.123.103:0/3285276427' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:41.960535+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:41.960535+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:41.960722+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:41.960722+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:41.965139+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:41.965139+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:41.965278+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00463008s) 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:41.965278+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00463008s) 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.967601+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.967601+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.967748+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.967748+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.968320+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.968320+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 
2026-03-09T20:18:41.968438+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.968438+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.968587+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.968587+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:41.972913+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:41.972913+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.981206+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.981206+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.984297+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.984297+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.994163+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.994163+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.994435+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.816 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.994435+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.816 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.995360+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:41.995360+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:42.002675+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:42.002675+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:41.978536+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:41.978536+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:42.969217+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e6: a(active, since 1.00857s) 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:42.969217+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e6: a(active, since 1.00857s) 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:42.970034+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:42.970034+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:42.974105+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:42.974105+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.245897+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.245897+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 
192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.255451+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.255451+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.400912+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.400912+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.403394+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.403394+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.859745+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.859745+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.862749+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.862749+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.241366+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.241366+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.533912+0000 mgr.a (mgr.14118) 5 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.533912+0000 mgr.a (mgr.14118) 5 : audit [DBG] from='client.14132 -' 
entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.842921+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:43.842921+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:43.843137+0000 mgr.a (mgr.14118) 7 : cephadm [INF] Generating ssh key... 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:43.843137+0000 mgr.a (mgr.14118) 7 : cephadm [INF] Generating ssh key... 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:44.420368+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:44.420368+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:44.867659+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e7: a(active, since 2s) 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:44.867659+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e7: a(active, since 2s) 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:44.184615+0000 mgr.a (mgr.14118) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:44.184615+0000 mgr.a (mgr.14118) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.210040+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTING 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.210040+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTING 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.311449+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.311449+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on 
http://192.168.123.103:8765 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.419682+0000 mgr.a (mgr.14118) 11 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.419682+0000 mgr.a (mgr.14118) 11 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.419729+0000 mgr.a (mgr.14118) 12 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTED 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.419729+0000 mgr.a (mgr.14118) 12 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTED 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.420341+0000 mgr.a (mgr.14118) 13 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Client ('192.168.123.103', 59018) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.420341+0000 mgr.a (mgr.14118) 13 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Client ('192.168.123.103', 59018) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:44.449710+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:44.449710+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.817 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.990494+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:44.990494+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.234344+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.234344+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:46.234694+0000 mgr.a (mgr.14118) 16 : cephadm [INF] Added host vm03 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:46.234694+0000 mgr.a 
(mgr.14118) 16 : cephadm [INF] Added host vm03 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.235661+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.235661+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.521993+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.521993+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:46.522859+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:46.522859+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.527027+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.527027+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.768856+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.768856+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:46.769511+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:46.769511+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.771854+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 
20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:46.771854+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:47.009342+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.103:0/776762689' entity='client.admin' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:47.009342+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.103:0/776762689' entity='client.admin' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:47.250597+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.103:0/1643594369' entity='client.admin' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:47.250597+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.103:0/1643594369' entity='client.admin' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:47.532930+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.103:0/2743274158' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:47.532930+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.103:0/2743274158' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:47.657547+0000 mon.a (mon.0) 71 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:47.657547+0000 mon.a (mon.0) 71 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:47.918753+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:47.918753+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:48.255996+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 192.168.123.103:0/2743274158' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T20:19:33.818 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:48.255996+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 
192.168.123.103:0/2743274158' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:48.259467+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e8: a(active, since 6s) 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:48.259467+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e8: a(active, since 6s) 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:48.562228+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.103:0/446170918' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:48.562228+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.103:0/446170918' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:51.194813+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:51.194813+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:51.195258+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:51.195258+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:51.200372+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:51.200372+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:51.200503+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00534097s) 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:51.200503+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00534097s) 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.202704+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.202704+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.203030+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": 
"a"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.203030+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.203826+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.203826+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.204155+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.204155+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.204470+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.204470+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:51.209854+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:51.209854+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.226412+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.226412+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.241984+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.241984+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.244288+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:51.244288+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:52.031305+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTING 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:52.031305+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTING 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:52.139156+0000 mgr.a (mgr.14150) 2 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:52.139156+0000 mgr.a (mgr.14150) 2 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:52.139627+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Client ('192.168.123.103', 33438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:52.139627+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Client ('192.168.123.103', 33438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:52.203151+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e10: a(active, since 1.00799s) 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:52.203151+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e10: a(active, since 1.00799s) 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.203763+0000 mgr.a (mgr.14150) 4 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.203763+0000 mgr.a (mgr.14150) 4 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.207504+0000 mgr.a (mgr.14150) 5 : audit [DBG] from='client.14154 -' entity='client.admin' 
cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.207504+0000 mgr.a (mgr.14150) 5 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:52.240770+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:52.240770+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:52.240821+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTED 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:18:52.240821+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTED 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.470914+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.470914+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.472985+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.472985+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.881473+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.881473+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:53.120978+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 192.168.123.103:0/1001944308' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:53.120978+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 
192.168.123.103:0/1001944308' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.444429+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.444429+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.722648+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:52.722648+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.819 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:53.431648+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 192.168.123.103:0/242845642' entity='client.admin' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:53.431648+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 
192.168.123.103:0/242845642' entity='client.admin' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:53.885903+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:53.885903+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:55.809539+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:55.809539+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:56.389070+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:56.389070+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:57.816412+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:18:57.816412+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:57.921012+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 192.168.123.103:0/1330021691' entity='client.admin' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:18:57.921012+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 
192.168.123.103:0/1330021691' entity='client.admin' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.122120+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.122120+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.124456+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.124456+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.125087+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.125087+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.127397+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.127397+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.132403+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.132403+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.134844+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.134844+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.847363+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.847363+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.847997+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.847997+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.848900+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.848900+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.849282+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.849282+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.993408+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.993408+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.997180+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.997180+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.999948+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.999948+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:02.844517+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 
bash[23232]: audit 2026-03-09T20:19:02.844517+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:02.849898+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:02.849898+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:02.885329+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:02.885329+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:02.922640+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:02.922640+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:02.959749+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:02.959749+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:07.862529+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:07.862529+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:08.408237+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:08.408237+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:09.687914+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 
2026-03-09T20:19:09.687914+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:09.688455+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm04 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:09.688455+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm04 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:09.688783+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:09.688783+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:09.973275+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:09.973275+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:11.205275+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.820 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:11.205275+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:11.238670+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:11.238670+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:11.765459+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:11.765459+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:13.205533+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:13.205533+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.431704+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.431704+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.434442+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.434442+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.440494+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.440494+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.442394+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.442394+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.442878+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.442878+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.443440+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.443440+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.443824+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.443824+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 
2026-03-09T20:19:14.444380+0000 mgr.a (mgr.14150) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:14.444380+0000 mgr.a (mgr.14150) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:14.478358+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:14.478358+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:14.511747+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:14.511747+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:14.540979+0000 mgr.a (mgr.14150) 23 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:14.540979+0000 mgr.a (mgr.14150) 23 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.570469+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.570469+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.572527+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.572527+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.574471+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.574471+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.645168+0000 mgr.a (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 
20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:14.645168+0000 mgr.a (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:15.205759+0000 mgr.a (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:15.205759+0000 mgr.a (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:17.205974+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:17.205974+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:18.640579+0000 mgr.a (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:18.640579+0000 mgr.a (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:19.163019+0000 mgr.a (mgr.14150) 28 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:19.163019+0000 mgr.a (mgr.14150) 28 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:19.206180+0000 mgr.a (mgr.14150) 29 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:19.206180+0000 mgr.a (mgr.14150) 29 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:20.404118+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:20.404118+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:20.404378+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Added host vm08 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:20.404378+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Added host vm08 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 
bash[23232]: audit 2026-03-09T20:19:20.404599+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:20.404599+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:20.710420+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:20.710420+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:21.206339+0000 mgr.a (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:21.206339+0000 mgr.a (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:21.976042+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.821 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:21.976042+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:22.540072+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:22.540072+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:23.206501+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:23.206501+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:25.206689+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:25.206689+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.314711+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 
bash[23232]: audit 2026-03-09T20:19:25.314711+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.316608+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.316608+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.318926+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.318926+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.320485+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.320485+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.320864+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.320864+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.321399+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.321399+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.321741+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.321741+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:25.322295+0000 mgr.a (mgr.14150) 34 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:33.822 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:25.322295+0000 mgr.a (mgr.14150) 34 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:25.352154+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:25.352154+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.354551+0000 mgr.a (mgr.14150) 36 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.354551+0000 mgr.a (mgr.14150) 36 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.459565+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.459565+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.462187+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.462187+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.464379+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:25.464379+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:25.387821+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:25.387821+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:25.422186+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:25.422186+0000 mgr.a (mgr.14150) 38 : 
cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:27.206852+0000 mgr.a (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:27.206852+0000 mgr.a (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:29.207096+0000 mgr.a (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:29.207096+0000 mgr.a (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:29.372151+0000 mon.a (mon.0) 143 : audit [INF] from='client.? 192.168.123.103:0/1213967800' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:29.372151+0000 mon.a (mon.0) 143 : audit [INF] from='client.? 192.168.123.103:0/1213967800' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:30.322043+0000 mon.a (mon.0) 144 : audit [INF] from='client.? 192.168.123.103:0/1213967800' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:30.322043+0000 mon.a (mon.0) 144 : audit [INF] from='client.? 
192.168.123.103:0/1213967800' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:30.323901+0000 mon.a (mon.0) 145 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:30.323901+0000 mon.a (mon.0) 145 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:31.207264+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:31.207264+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.910091+0000 mgr.a (mgr.14150) 42 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.910091+0000 mgr.a (mgr.14150) 42 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:31.911368+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Saving service mon spec with placement vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c;count:3 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:31.911368+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Saving service mon spec with placement vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c;count:3 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.919771+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.919771+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.920479+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.920479+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.921530+0000 mon.a (mon.0) 148 : audit [DBG] 
from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:33.822 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.921530+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.921998+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.921998+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.929885+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.929885+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.931043+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.931043+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.931471+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: audit 2026-03-09T20:19:31.931471+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:31.931986+0000 mgr.a (mgr.14150) 44 : cephadm [INF] Deploying daemon mon.c on vm08 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cephadm 2026-03-09T20:19:31.931986+0000 mgr.a (mgr.14150) 44 : cephadm [INF] Deploying daemon mon.c on vm08 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:33.207434+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 bash[23232]: cluster 2026-03-09T20:19:33.207434+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.823 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:33 vm08 
bash[23232]: debug 2026-03-09T20:19:33.668+0000 7f8c1c11e640 1 mon.c@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T20:19:33.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:33 vm03 bash[20708]: cluster 2026-03-09T20:19:33.207434+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:33.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:33 vm03 bash[20708]: cluster 2026-03-09T20:19:33.207434+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.064 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 systemd[1]: Started Ceph mon.b for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d. 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.174+0000 7f90b3b4cd80 0 set uid:gid to 167:167 (ceph:ceph) 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.174+0000 7f90b3b4cd80 0 ceph version 19.2.3-678-ge911bdeb (e911bdebe5c8faa3800735d1568fcdca65db60df) squid (stable), process ceph-mon, pid 7 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.174+0000 7f90b3b4cd80 0 pidfile_write: ignore empty --pid-file 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 0 load: jerasure load: lrc 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: RocksDB version: 7.9.2 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Git sha 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Compile date 2026-02-25 18:11:04 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: DB SUMMARY 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: DB Session ID: G8HZ7MWUYHIFT66U796Q 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: CURRENT file: CURRENT 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: IDENTITY file: IDENTITY 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: MANIFEST file: MANIFEST-000005 size: 59 Bytes 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-b/store.db dir, Total Num: 0, files: 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-b/store.db: 000004.log size: 511 ; 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 
20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.error_if_exists: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.create_if_missing: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.paranoid_checks: 1 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.flush_verify_memtable_count: 1 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.track_and_verify_wals_in_manifest: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.verify_sst_unique_id_in_manifest: 1 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.env: 0x5622835eddc0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.fs: PosixFileSystem 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.info_log: 0x562289a2d880 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_file_opening_threads: 16 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.statistics: (nil) 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.use_fsync: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_log_file_size: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_manifest_file_size: 1073741824 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.log_file_time_to_roll: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.keep_log_file_num: 1000 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.recycle_log_file_num: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.allow_fallocate: 1 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.allow_mmap_reads: 0 2026-03-09T20:19:35.371 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.allow_mmap_writes: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.use_direct_reads: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.create_missing_column_families: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.db_log_dir: 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.wal_dir: 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.table_cache_numshardbits: 6 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.WAL_ttl_seconds: 0 2026-03-09T20:19:35.371 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.WAL_size_limit_MB: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_write_batch_group_size_bytes: 1048576 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.manifest_preallocation_size: 4194304 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.is_fd_close_on_exec: 1 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.advise_random_on_open: 1 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.db_write_buffer_size: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.write_buffer_manager: 0x562289a31900 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.access_hint_on_compaction_start: 1 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.random_access_max_buffer_size: 1048576 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.use_adaptive_mutex: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 
7f90b3b4cd80 4 rocksdb: Options.rate_limiter: (nil) 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.wal_recovery_mode: 2 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.enable_thread_tracking: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.enable_pipelined_write: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.unordered_write: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.allow_concurrent_memtable_write: 1 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.write_thread_max_yield_usec: 100 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.write_thread_slow_yield_usec: 3 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.row_cache: None 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.wal_filter: None 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.avoid_flush_during_recovery: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.allow_ingest_behind: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.two_write_queues: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.manual_wal_flush: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.wal_compression: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.atomic_flush: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.avoid_unnecessary_blocking_io: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 
bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.persist_stats_to_disk: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.write_dbid_to_manifest: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.log_readahead_size: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.file_checksum_gen_factory: Unknown 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.best_efforts_recovery: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_bgerror_resume_count: 2147483647 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bgerror_resume_retry_interval: 1000000 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.allow_data_in_errors: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.db_host_id: __hostname__ 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.enforce_single_del_contracts: true 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_background_jobs: 2 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_background_compactions: -1 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_subcompactions: 1 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.avoid_flush_during_shutdown: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.writable_file_max_buffer_size: 1048576 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.delayed_write_rate : 16777216 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_total_wal_size: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 
rocksdb: Options.stats_dump_period_sec: 600 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.stats_persist_period_sec: 600 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.stats_history_buffer_size: 1048576 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_open_files: -1 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bytes_per_sync: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.wal_bytes_per_sync: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.strict_bytes_per_sync: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_readahead_size: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_background_flushes: -1 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Compression algorithms supported: 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: kZSTD supported: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: kXpressCompression supported: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: kBZip2Compression supported: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: kZSTDNotFinalCompression supported: 0 2026-03-09T20:19:35.372 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: kLZ4Compression supported: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: kZlibCompression supported: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: kLZ4HCCompression supported: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: kSnappyCompression supported: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Fast CRC32 supported: Supported on x86 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 
2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: DMutex implementation: pthread_mutex_t 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]: 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.comparator: leveldb.BytewiseComparator 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.merge_operator: 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_filter: None 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_filter_factory: None 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.sst_partitioner_factory: None 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.memtable_factory: SkipListFactory 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.table_factory: BlockBasedTable 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562289a2c480) 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cache_index_and_filter_blocks: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cache_index_and_filter_blocks_with_high_priority: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: pin_l0_filter_and_index_blocks_in_cache: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: pin_top_level_index_and_filter: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: index_type: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: data_block_index_type: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: index_shortening: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: data_block_hash_table_util_ratio: 0.750000 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: checksum: 4 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: no_block_cache: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 
bash[22793]: block_cache: 0x562289a53350 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: block_cache_name: BinnedLRUCache 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: block_cache_options: 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: capacity : 536870912 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: num_shard_bits : 4 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: strict_capacity_limit : 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: high_pri_pool_ratio: 0.000 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: block_cache_compressed: (nil) 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: persistent_cache: (nil) 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: block_size: 4096 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: block_size_deviation: 10 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: block_restart_interval: 16 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: index_block_restart_interval: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: metadata_block_size: 4096 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: partition_filters: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: use_delta_encoding: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: filter_policy: bloomfilter 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: whole_key_filtering: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: verify_compression: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: read_amp_bytes_per_bit: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: format_version: 5 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: enable_index_compression: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: block_align: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: max_auto_readahead_size: 262144 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: prepopulate_block_cache: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: initial_auto_readahead_size: 8192 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: num_file_reads_for_auto_readahead: 2 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.write_buffer_size: 33554432 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 
7f90b3b4cd80 4 rocksdb: Options.max_write_buffer_number: 2 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compression: NoCompression 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bottommost_compression: Disabled 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.prefix_extractor: nullptr 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.num_levels: 7 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.min_write_buffer_number_to_merge: 1 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_write_buffer_size_to_maintain: 0 2026-03-09T20:19:35.373 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bottommost_compression_opts.level: 32767 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bottommost_compression_opts.strategy: 0 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bottommost_compression_opts.parallel_threads: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bottommost_compression_opts.enabled: false 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bottommost_compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bottommost_compression_opts.use_zstd_dict_trainer: true 
2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compression_opts.window_bits: -14 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compression_opts.level: 32767 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compression_opts.strategy: 0 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compression_opts.max_dict_bytes: 0 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compression_opts.use_zstd_dict_trainer: true 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compression_opts.parallel_threads: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compression_opts.enabled: false 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compression_opts.max_dict_buffer_bytes: 0 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.level0_file_num_compaction_trigger: 4 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.level0_slowdown_writes_trigger: 20 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.level0_stop_writes_trigger: 36 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.target_file_size_base: 67108864 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.target_file_size_multiplier: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_bytes_for_level_base: 268435456 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: 
Options.max_bytes_for_level_multiplier_addtl[0]: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_sequential_skip_in_iterations: 8 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_compaction_bytes: 1677721600 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.ignore_max_compaction_bytes_for_input: true 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.arena_block_size: 1048576 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.disable_auto_compactions: 0 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_style: kCompactionStyleLevel 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_pri: kMinOverlappingRatio 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_options_universal.size_ratio: 1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2 
2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0 2026-03-09T20:19:35.374 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0); 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.inplace_update_support: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.inplace_update_num_locks: 10000 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.memtable_whole_key_filtering: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.memtable_huge_page_size: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.bloom_locality: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.max_successive_merges: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.optimize_filters_for_hits: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.paranoid_file_checks: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.force_consistency_checks: 1 
2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.report_bg_io_stats: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.ttl: 2592000 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.periodic_compaction_seconds: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.preclude_last_level_data_seconds: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.preserve_internal_time_seconds: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.enable_blob_files: false 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.min_blob_size: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.blob_file_size: 268435456 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.blob_compression_type: NoCompression 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.enable_blob_garbage_collection: false 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.blob_garbage_collection_age_cutoff: 0.250000 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.blob_compaction_readahead_size: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.blob_file_starting_level: 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.178+0000 7f90b3b4cd80 4 rocksdb: Options.experimental_mempurge_threshold: 0.000000 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.182+0000 7f90b3b4cd80 4 rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-b/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.182+0000 7f90b3b4cd80 4 rocksdb: [db/version_set.cc:5581] Column 
family [default] (ID 0), log number is 0 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.182+0000 7f90b3b4cd80 4 rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 76247a17-b51e-4363-8aed-45ee60e1f11f 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.182+0000 7f90b3b4cd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773087575185620, "job": 1, "event": "recovery_started", "wal_files": [4]} 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.182+0000 7f90b3b4cd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.182+0000 7f90b3b4cd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773087575186842, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1773087575, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "76247a17-b51e-4363-8aed-45ee60e1f11f", "db_session_id": "G8HZ7MWUYHIFT66U796Q", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}} 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.182+0000 7f90b3b4cd80 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1773087575186897, "job": 1, "event": "recovery_finished"} 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.182+0000 7f90b3b4cd80 4 rocksdb: [db/version_set.cc:5047] Creating manifest 10 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.186+0000 7f90b3b4cd80 4 rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-b/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.186+0000 7f90b3b4cd80 4 rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562289a54e00 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.186+0000 7f90b3b4cd80 4 rocksdb: DB pointer 
0x562289b60000 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.186+0000 7f90b3b4cd80 0 mon.b does not exist in monmap, will attempt to join an existing cluster 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.186+0000 7f90b3b4cd80 0 using public_addr v2:192.168.123.104:0/0 -> [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] 2026-03-09T20:19:35.375 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.186+0000 7f90b3b4cd80 0 starting mon.b rank -1 at public addrs [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] at bind addrs [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon_data /var/lib/ceph/mon/ceph-b fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.186+0000 7f90b3b4cd80 1 mon.b@-1(???) e0 preinit fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.186+0000 7f90a9916640 4 rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS ------- 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.186+0000 7f90a9916640 4 rocksdb: [db/db_impl/db_impl.cc:1111] 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: ** DB Stats ** 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Interval stall: 00:00:0.000 H:M:S, 0.0 percent 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: ** Compaction Stats [default] ** 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Level Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 
09 20:19:35 vm04 bash[22793]: L0 1/0 1.60 KB 0.2 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Sum 1/0 1.60 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Int 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 0.0 1.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: ** Compaction Stats [default] ** 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Priority Files Size Score Read(GB) Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB) 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: User 0/0 0.00 KB 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.3 0.00 0.00 1 0.001 0 0 0.0 0.0 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Uptime(secs): 0.0 total, 0.0 interval 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Flush(GB): cumulative 0.000, interval 0.000 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: AddFile(GB): cumulative 0.000, interval 0.000 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: AddFile(Total Files): cumulative 0, interval 0 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: AddFile(L0 Files): cumulative 0, interval 0 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: AddFile(Keys): cumulative 0, interval 0 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Cumulative compaction: 0.00 GB write, 0.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Interval compaction: 0.00 GB write, 0.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Block cache BinnedLRUCache@0x562289a53350#7 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-06 secs_since: 0 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: Block cache entry 
stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%) 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: ** File Read Latency Histogram By Level [default] ** 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 0 mon.b@-1(synchronizing).mds e1 new map 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 0 mon.b@-1(synchronizing).mds e1 print_map 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: e1 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: btime 2026-03-09T20:18:31:513185+0000 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: enable_multiple, ever_enabled_multiple: 1,1 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes} 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: legacy client fscid: -1 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: No filesystems configured 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 1 mon.b@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 1 mon.b@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 1 mon.b@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 1 mon.b@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 1 mon.b@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 1 mon.b@-1(synchronizing).osd e4 e4: 0 total, 0 up, 0 in 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 0 mon.b@-1(synchronizing).osd e4 crush map has features 3314932999778484224, adjusting msgr requires 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 0 mon.b@-1(synchronizing).osd e4 
crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:19:35.376 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: debug 2026-03-09T20:19:35.206+0000 7f90ac91c640 0 mon.b@-1(synchronizing).osd e4 crush map has features 288514050185494528, adjusting msgr requires 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:31.513696+0000 mon.a (mon.0) 0 : cluster [INF] mkfs f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:31.513696+0000 mon.a (mon.0) 0 : cluster [INF] mkfs f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:31.504964+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:31.504964+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488494+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488494+0000 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0) 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488527+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488527+0000 mon.a (mon.0) 2 : cluster [DBG] monmap epoch 1 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488532+0000 mon.a (mon.0) 3 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488532+0000 mon.a (mon.0) 3 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488536+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488536+0000 mon.a (mon.0) 4 : cluster [DBG] last_changed 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488543+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488543+0000 mon.a (mon.0) 5 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 
20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488547+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488547+0000 mon.a (mon.0) 6 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488550+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488550+0000 mon.a (mon.0) 7 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488554+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488554+0000 mon.a (mon.0) 8 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488752+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488752+0000 mon.a (mon.0) 9 : cluster [DBG] fsmap 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488767+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.488767+0000 mon.a (mon.0) 10 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.489232+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:32.489232+0000 mon.a (mon.0) 11 : cluster [DBG] mgrmap e1: no daemons active 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:32.712498+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/2780831427' entity='client.admin' 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:32.712498+0000 mon.a (mon.0) 12 : audit [INF] from='client.? 192.168.123.103:0/2780831427' entity='client.admin' 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:33.269522+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.103:0/1660484640' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:33.269522+0000 mon.a (mon.0) 13 : audit [DBG] from='client.? 192.168.123.103:0/1660484640' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:35.511427+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 
192.168.123.103:0/2523825932' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:35.511427+0000 mon.a (mon.0) 14 : audit [DBG] from='client.? 192.168.123.103:0/2523825932' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:36.068742+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:36.068742+0000 mon.a (mon.0) 15 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:36.072300+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.0036244s) 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:36.072300+0000 mon.a (mon.0) 16 : cluster [DBG] mgrmap e2: a(active, starting, since 0.0036244s) 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073529+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073529+0000 mon.a (mon.0) 17 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073584+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073584+0000 mon.a (mon.0) 18 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073834+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073834+0000 mon.a (mon.0) 19 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073893+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073893+0000 mon.a (mon.0) 20 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073945+0000 mon.a (mon.0) 21 
: audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073945+0000 mon.a (mon.0) 21 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073999+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.073999+0000 mon.a (mon.0) 22 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.074589+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.074589+0000 mon.a (mon.0) 23 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.075114+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.075114+0000 mon.a (mon.0) 24 : audit [DBG] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:36.078597+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:36.078597+0000 mon.a (mon.0) 25 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.086449+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.086449+0000 mon.a (mon.0) 26 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.089242+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:35.377 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.089242+0000 mon.a (mon.0) 27 : audit [INF] from='mgr.14102 
192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.089450+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.089450+0000 mon.a (mon.0) 28 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.091339+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.091339+0000 mon.a (mon.0) 29 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.093120+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:36.093120+0000 mon.a (mon.0) 30 : audit [INF] from='mgr.14102 192.168.123.103:0/2390161904' entity='mgr.a' 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:37.076795+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: a(active, since 1.00812s) 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:37.076795+0000 mon.a (mon.0) 31 : cluster [DBG] mgrmap e3: a(active, since 1.00812s) 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:37.794105+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.103:0/288382585' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:37.794105+0000 mon.a (mon.0) 32 : audit [DBG] from='client.? 192.168.123.103:0/288382585' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:38.040515+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.103:0/1974182671' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:38.040515+0000 mon.a (mon.0) 33 : audit [INF] from='client.? 192.168.123.103:0/1974182671' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:38.042727+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 
192.168.123.103:0/1974182671' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:38.042727+0000 mon.a (mon.0) 34 : audit [INF] from='client.? 192.168.123.103:0/1974182671' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:38.290171+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:38.290171+0000 mon.a (mon.0) 35 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:39.044376+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:39.044376+0000 mon.a (mon.0) 36 : audit [INF] from='client.? 192.168.123.103:0/936723196' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:39.046414+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:39.046414+0000 mon.a (mon.0) 37 : cluster [DBG] mgrmap e4: a(active, since 2s) 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:39.350654+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 192.168.123.103:0/3285276427' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:39.350654+0000 mon.a (mon.0) 38 : audit [DBG] from='client.? 
192.168.123.103:0/3285276427' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:41.960535+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:41.960535+0000 mon.a (mon.0) 39 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:41.960722+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:41.960722+0000 mon.a (mon.0) 40 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:41.965139+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:41.965139+0000 mon.a (mon.0) 41 : cluster [DBG] osdmap e2: 0 total, 0 up, 0 in 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:41.965278+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00463008s) 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:41.965278+0000 mon.a (mon.0) 42 : cluster [DBG] mgrmap e5: a(active, starting, since 0.00463008s) 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.967601+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.967601+0000 mon.a (mon.0) 43 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.967748+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.967748+0000 mon.a (mon.0) 44 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.968320+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.968320+0000 mon.a (mon.0) 45 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 
2026-03-09T20:18:41.968438+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.968438+0000 mon.a (mon.0) 46 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.968587+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.968587+0000 mon.a (mon.0) 47 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:41.972913+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:41.972913+0000 mon.a (mon.0) 48 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.981206+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.981206+0000 mon.a (mon.0) 49 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.984297+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.984297+0000 mon.a (mon.0) 50 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.994163+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.994163+0000 mon.a (mon.0) 51 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.994435+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.994435+0000 mon.a (mon.0) 52 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.378 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.995360+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:41.995360+0000 mon.a (mon.0) 53 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:42.002675+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:42.002675+0000 mon.a (mon.0) 54 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:41.978536+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:41.978536+0000 mgr.a (mgr.14118) 1 : cephadm [INF] Found migration_current of "None". Setting to last migration. 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:42.969217+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e6: a(active, since 1.00857s) 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:42.969217+0000 mon.a (mon.0) 55 : cluster [DBG] mgrmap e6: a(active, since 1.00857s) 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:42.970034+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:19:35.378 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:42.970034+0000 mgr.a (mgr.14118) 2 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:42.974105+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:42.974105+0000 mgr.a (mgr.14118) 3 : audit [DBG] from='client.14122 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.245897+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.245897+0000 mon.a (mon.0) 56 : audit [INF] from='mgr.14118 
192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.255451+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.255451+0000 mon.a (mon.0) 57 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.400912+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.400912+0000 mon.a (mon.0) 58 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.403394+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.403394+0000 mon.a (mon.0) 59 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.859745+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.859745+0000 mon.a (mon.0) 60 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.862749+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.862749+0000 mon.a (mon.0) 61 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.241366+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.241366+0000 mgr.a (mgr.14118) 4 : audit [DBG] from='client.14130 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.533912+0000 mgr.a (mgr.14118) 5 : audit [DBG] from='client.14132 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.533912+0000 mgr.a (mgr.14118) 5 : audit [DBG] from='client.14132 -' 
entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "root", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.842921+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:43.842921+0000 mgr.a (mgr.14118) 6 : audit [DBG] from='client.14134 -' entity='client.admin' cmd=[{"prefix": "cephadm generate-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:43.843137+0000 mgr.a (mgr.14118) 7 : cephadm [INF] Generating ssh key... 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:43.843137+0000 mgr.a (mgr.14118) 7 : cephadm [INF] Generating ssh key... 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:44.420368+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:44.420368+0000 mon.a (mon.0) 62 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:44.867659+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e7: a(active, since 2s) 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:44.867659+0000 mon.a (mon.0) 63 : cluster [DBG] mgrmap e7: a(active, since 2s) 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:44.184615+0000 mgr.a (mgr.14118) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:44.184615+0000 mgr.a (mgr.14118) 8 : audit [DBG] from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.210040+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTING 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.210040+0000 mgr.a (mgr.14118) 9 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTING 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.311449+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.311449+0000 mgr.a (mgr.14118) 10 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on 
http://192.168.123.103:8765 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.419682+0000 mgr.a (mgr.14118) 11 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.419682+0000 mgr.a (mgr.14118) 11 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.419729+0000 mgr.a (mgr.14118) 12 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTED 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.419729+0000 mgr.a (mgr.14118) 12 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Bus STARTED 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.420341+0000 mgr.a (mgr.14118) 13 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Client ('192.168.123.103', 59018) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.420341+0000 mgr.a (mgr.14118) 13 : cephadm [INF] [09/Mar/2026:20:18:44] ENGINE Client ('192.168.123.103', 59018) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:44.449710+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:44.449710+0000 mgr.a (mgr.14118) 14 : audit [DBG] from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm03", "addr": "192.168.123.103", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.990494+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:44.990494+0000 mgr.a (mgr.14118) 15 : cephadm [INF] Deploying cephadm binary to vm03 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.234344+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.234344+0000 mon.a (mon.0) 64 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:46.234694+0000 mgr.a (mgr.14118) 16 : cephadm [INF] Added host vm03 2026-03-09T20:19:35.379 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:46.234694+0000 mgr.a 
(mgr.14118) 16 : cephadm [INF] Added host vm03 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.235661+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.235661+0000 mon.a (mon.0) 65 : audit [DBG] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.521993+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.521993+0000 mgr.a (mgr.14118) 17 : audit [DBG] from='client.14140 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:46.522859+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:46.522859+0000 mgr.a (mgr.14118) 18 : cephadm [INF] Saving service mon spec with placement count:5 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.527027+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.527027+0000 mon.a (mon.0) 66 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.768856+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.768856+0000 mgr.a (mgr.14118) 19 : audit [DBG] from='client.14142 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "unmanaged": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:46.769511+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:46.769511+0000 mgr.a (mgr.14118) 20 : cephadm [INF] Saving service mgr spec with placement count:2 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.771854+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 
20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:46.771854+0000 mon.a (mon.0) 67 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:47.009342+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.103:0/776762689' entity='client.admin' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:47.009342+0000 mon.a (mon.0) 68 : audit [INF] from='client.? 192.168.123.103:0/776762689' entity='client.admin' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:47.250597+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.103:0/1643594369' entity='client.admin' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:47.250597+0000 mon.a (mon.0) 69 : audit [INF] from='client.? 192.168.123.103:0/1643594369' entity='client.admin' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:47.532930+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.103:0/2743274158' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:47.532930+0000 mon.a (mon.0) 70 : audit [INF] from='client.? 192.168.123.103:0/2743274158' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:47.657547+0000 mon.a (mon.0) 71 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:47.657547+0000 mon.a (mon.0) 71 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:47.918753+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:47.918753+0000 mon.a (mon.0) 72 : audit [INF] from='mgr.14118 192.168.123.103:0/1353242874' entity='mgr.a' 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:48.255996+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 192.168.123.103:0/2743274158' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:48.255996+0000 mon.a (mon.0) 73 : audit [INF] from='client.? 
192.168.123.103:0/2743274158' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:48.259467+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e8: a(active, since 6s) 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:48.259467+0000 mon.a (mon.0) 74 : cluster [DBG] mgrmap e8: a(active, since 6s) 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:48.562228+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.103:0/446170918' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:48.562228+0000 mon.a (mon.0) 75 : audit [DBG] from='client.? 192.168.123.103:0/446170918' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:51.194813+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:51.194813+0000 mon.a (mon.0) 76 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:51.195258+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:51.195258+0000 mon.a (mon.0) 77 : cluster [INF] Activating manager daemon a 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:51.200372+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:51.200372+0000 mon.a (mon.0) 78 : cluster [DBG] osdmap e3: 0 total, 0 up, 0 in 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:51.200503+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00534097s) 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:51.200503+0000 mon.a (mon.0) 79 : cluster [DBG] mgrmap e9: a(active, starting, since 0.00534097s) 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.202704+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.202704+0000 mon.a (mon.0) 80 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.203030+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": 
"a"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.203030+0000 mon.a (mon.0) 81 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.203826+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.203826+0000 mon.a (mon.0) 82 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.204155+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.204155+0000 mon.a (mon.0) 83 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.204470+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.204470+0000 mon.a (mon.0) 84 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:51.209854+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:51.209854+0000 mon.a (mon.0) 85 : cluster [INF] Manager daemon a is now available 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.226412+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.226412+0000 mon.a (mon.0) 86 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.241984+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.241984+0000 mon.a (mon.0) 87 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.244288+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:51.244288+0000 mon.a (mon.0) 88 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:52.031305+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTING 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:52.031305+0000 mgr.a (mgr.14150) 1 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTING 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:52.139156+0000 mgr.a (mgr.14150) 2 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:19:35.380 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:52.139156+0000 mgr.a (mgr.14150) 2 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:52.139627+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Client ('192.168.123.103', 33438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:52.139627+0000 mgr.a (mgr.14150) 3 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Client ('192.168.123.103', 33438) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:52.203151+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e10: a(active, since 1.00799s) 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:52.203151+0000 mon.a (mon.0) 89 : cluster [DBG] mgrmap e10: a(active, since 1.00799s) 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.203763+0000 mgr.a (mgr.14150) 4 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.203763+0000 mgr.a (mgr.14150) 4 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.207504+0000 mgr.a (mgr.14150) 5 : audit [DBG] from='client.14154 -' entity='client.admin' 
cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.207504+0000 mgr.a (mgr.14150) 5 : audit [DBG] from='client.14154 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:52.240770+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:52.240770+0000 mgr.a (mgr.14150) 6 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:52.240821+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTED 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:18:52.240821+0000 mgr.a (mgr.14150) 7 : cephadm [INF] [09/Mar/2026:20:18:52] ENGINE Bus STARTED 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.470914+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.470914+0000 mon.a (mon.0) 90 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.472985+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.472985+0000 mon.a (mon.0) 91 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.881473+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.881473+0000 mon.a (mon.0) 92 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:53.120978+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 192.168.123.103:0/1001944308' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:53.120978+0000 mon.a (mon.0) 93 : audit [DBG] from='client.? 
192.168.123.103:0/1001944308' entity='client.admin' cmd=[{"prefix": "config get", "who": "mgr", "key": "mgr/dashboard/ssl_server_port"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.444429+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.444429+0000 mgr.a (mgr.14150) 8 : audit [DBG] from='client.14162 -' entity='client.admin' cmd=[{"prefix": "dashboard create-self-signed-cert", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.722648+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:52.722648+0000 mgr.a (mgr.14150) 9 : audit [DBG] from='client.14164 -' entity='client.admin' cmd=[{"prefix": "dashboard ac-user-create", "username": "admin", "rolename": "administrator", "force_password": true, "pwd_update_required": true, "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:53.431648+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 192.168.123.103:0/242845642' entity='client.admin' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:53.431648+0000 mon.a (mon.0) 94 : audit [INF] from='client.? 
192.168.123.103:0/242845642' entity='client.admin' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:53.885903+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:53.885903+0000 mon.a (mon.0) 95 : cluster [DBG] mgrmap e11: a(active, since 2s) 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:55.809539+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:55.809539+0000 mon.a (mon.0) 96 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:56.389070+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:56.389070+0000 mon.a (mon.0) 97 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:57.816412+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:18:57.816412+0000 mon.a (mon.0) 98 : cluster [DBG] mgrmap e12: a(active, since 6s) 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:57.921012+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 192.168.123.103:0/1330021691' entity='client.admin' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:18:57.921012+0000 mon.a (mon.0) 99 : audit [INF] from='client.? 
192.168.123.103:0/1330021691' entity='client.admin' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.122120+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.122120+0000 mon.a (mon.0) 100 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.124456+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.124456+0000 mon.a (mon.0) 101 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.125087+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.125087+0000 mon.a (mon.0) 102 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.127397+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.127397+0000 mon.a (mon.0) 103 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.132403+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.132403+0000 mon.a (mon.0) 104 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.134844+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.134844+0000 mon.a (mon.0) 105 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.847363+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.847363+0000 mon.a (mon.0) 106 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.847997+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.847997+0000 mon.a (mon.0) 107 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.848900+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.848900+0000 mon.a (mon.0) 108 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.849282+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.849282+0000 mon.a (mon.0) 109 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.993408+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.993408+0000 mon.a (mon.0) 110 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.997180+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.997180+0000 mon.a (mon.0) 111 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.999948+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.999948+0000 mon.a (mon.0) 112 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.381 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:02.844517+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 
bash[22793]: audit 2026-03-09T20:19:02.844517+0000 mgr.a (mgr.14150) 10 : audit [DBG] from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "*", "mode": "0755", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:02.849898+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:02.849898+0000 mgr.a (mgr.14150) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:02.885329+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:02.885329+0000 mgr.a (mgr.14150) 12 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:02.922640+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:02.922640+0000 mgr.a (mgr.14150) 13 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:02.959749+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:02.959749+0000 mgr.a (mgr.14150) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:07.862529+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:07.862529+0000 mgr.a (mgr.14150) 15 : audit [DBG] from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm04", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:08.408237+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:08.408237+0000 mgr.a (mgr.14150) 16 : cephadm [INF] Deploying cephadm binary to vm04 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:09.687914+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 
2026-03-09T20:19:09.687914+0000 mon.a (mon.0) 113 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:09.688455+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm04 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:09.688455+0000 mgr.a (mgr.14150) 17 : cephadm [INF] Added host vm04 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:09.688783+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:09.688783+0000 mon.a (mon.0) 114 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:09.973275+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:09.973275+0000 mon.a (mon.0) 115 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:11.205275+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:11.205275+0000 mgr.a (mgr.14150) 18 : cluster [DBG] pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:11.238670+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:11.238670+0000 mon.a (mon.0) 116 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:11.765459+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:11.765459+0000 mon.a (mon.0) 117 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:13.205533+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:13.205533+0000 mgr.a (mgr.14150) 19 : cluster [DBG] pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.431704+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.431704+0000 mon.a (mon.0) 118 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.434442+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.434442+0000 mon.a (mon.0) 119 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.440494+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.440494+0000 mon.a (mon.0) 120 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.442394+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.442394+0000 mon.a (mon.0) 121 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.442878+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.442878+0000 mon.a (mon.0) 122 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.443440+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.443440+0000 mon.a (mon.0) 123 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.443824+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.443824+0000 mon.a (mon.0) 124 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 
2026-03-09T20:19:14.444380+0000 mgr.a (mgr.14150) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:14.444380+0000 mgr.a (mgr.14150) 20 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:14.478358+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:14.478358+0000 mgr.a (mgr.14150) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:14.511747+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:14.511747+0000 mgr.a (mgr.14150) 22 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:14.540979+0000 mgr.a (mgr.14150) 23 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:14.540979+0000 mgr.a (mgr.14150) 23 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.570469+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.570469+0000 mon.a (mon.0) 125 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.572527+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.572527+0000 mon.a (mon.0) 126 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.382 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.574471+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.574471+0000 mon.a (mon.0) 127 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.645168+0000 mgr.a (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 
20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:14.645168+0000 mgr.a (mgr.14150) 24 : audit [DBG] from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:15.205759+0000 mgr.a (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:15.205759+0000 mgr.a (mgr.14150) 25 : cluster [DBG] pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:17.205974+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:17.205974+0000 mgr.a (mgr.14150) 26 : cluster [DBG] pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:18.640579+0000 mgr.a (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:18.640579+0000 mgr.a (mgr.14150) 27 : audit [DBG] from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "vm08", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:19.163019+0000 mgr.a (mgr.14150) 28 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:19.163019+0000 mgr.a (mgr.14150) 28 : cephadm [INF] Deploying cephadm binary to vm08 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:19.206180+0000 mgr.a (mgr.14150) 29 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:19.206180+0000 mgr.a (mgr.14150) 29 : cluster [DBG] pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:20.404118+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:20.404118+0000 mon.a (mon.0) 128 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:20.404378+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Added host vm08 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:20.404378+0000 mgr.a (mgr.14150) 30 : cephadm [INF] Added host vm08 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 
bash[22793]: audit 2026-03-09T20:19:20.404599+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:20.404599+0000 mon.a (mon.0) 129 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:20.710420+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:20.710420+0000 mon.a (mon.0) 130 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:21.206339+0000 mgr.a (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:21.206339+0000 mgr.a (mgr.14150) 31 : cluster [DBG] pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:21.976042+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:21.976042+0000 mon.a (mon.0) 131 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:22.540072+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:22.540072+0000 mon.a (mon.0) 132 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:23.206501+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:23.206501+0000 mgr.a (mgr.14150) 32 : cluster [DBG] pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:25.206689+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:25.206689+0000 mgr.a (mgr.14150) 33 : cluster [DBG] pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.314711+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 
bash[22793]: audit 2026-03-09T20:19:25.314711+0000 mon.a (mon.0) 133 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.316608+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.316608+0000 mon.a (mon.0) 134 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.318926+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.318926+0000 mon.a (mon.0) 135 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.320485+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.320485+0000 mon.a (mon.0) 136 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.320864+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.320864+0000 mon.a (mon.0) 137 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm08", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.321399+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.321399+0000 mon.a (mon.0) 138 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.321741+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.321741+0000 mon.a (mon.0) 139 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:25.322295+0000 mgr.a (mgr.14150) 34 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:35.383 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:25.322295+0000 mgr.a (mgr.14150) 34 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:25.352154+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:25.352154+0000 mgr.a (mgr.14150) 35 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.354551+0000 mgr.a (mgr.14150) 36 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.354551+0000 mgr.a (mgr.14150) 36 : audit [DBG] from='client.14180 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.459565+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.459565+0000 mon.a (mon.0) 140 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.462187+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.462187+0000 mon.a (mon.0) 141 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.464379+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:25.464379+0000 mon.a (mon.0) 142 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:25.387821+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:25.387821+0000 mgr.a (mgr.14150) 37 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:35.383 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:25.422186+0000 mgr.a (mgr.14150) 38 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:25.422186+0000 mgr.a (mgr.14150) 38 : 
cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:27.206852+0000 mgr.a (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:27.206852+0000 mgr.a (mgr.14150) 39 : cluster [DBG] pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:29.207096+0000 mgr.a (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:29.207096+0000 mgr.a (mgr.14150) 40 : cluster [DBG] pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:29.372151+0000 mon.a (mon.0) 143 : audit [INF] from='client.? 192.168.123.103:0/1213967800' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:29.372151+0000 mon.a (mon.0) 143 : audit [INF] from='client.? 192.168.123.103:0/1213967800' entity='client.admin' cmd=[{"prefix": "osd crush tunables", "profile": "default"}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:30.322043+0000 mon.a (mon.0) 144 : audit [INF] from='client.? 192.168.123.103:0/1213967800' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:30.322043+0000 mon.a (mon.0) 144 : audit [INF] from='client.? 
192.168.123.103:0/1213967800' entity='client.admin' cmd='[{"prefix": "osd crush tunables", "profile": "default"}]': finished 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:30.323901+0000 mon.a (mon.0) 145 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:30.323901+0000 mon.a (mon.0) 145 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:31.207264+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:31.207264+0000 mgr.a (mgr.14150) 41 : cluster [DBG] pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.910091+0000 mgr.a (mgr.14150) 42 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.910091+0000 mgr.a (mgr.14150) 42 : audit [DBG] from='client.14184 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "placement": "3;vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:31.911368+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Saving service mon spec with placement vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c;count:3 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:31.911368+0000 mgr.a (mgr.14150) 43 : cephadm [INF] Saving service mon spec with placement vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c;count:3 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.919771+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.919771+0000 mon.a (mon.0) 146 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.920479+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.920479+0000 mon.a (mon.0) 147 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.921530+0000 mon.a (mon.0) 148 : audit [DBG] 
from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.921530+0000 mon.a (mon.0) 148 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.921998+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.921998+0000 mon.a (mon.0) 149 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.929885+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.929885+0000 mon.a (mon.0) 150 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.931043+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.931043+0000 mon.a (mon.0) 151 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.931471+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: audit 2026-03-09T20:19:31.931471+0000 mon.a (mon.0) 152 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:31.931986+0000 mgr.a (mgr.14150) 44 : cephadm [INF] Deploying daemon mon.c on vm08 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cephadm 2026-03-09T20:19:31.931986+0000 mgr.a (mgr.14150) 44 : cephadm [INF] Deploying daemon mon.c on vm08 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:33.207434+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 bash[22793]: cluster 2026-03-09T20:19:33.207434+0000 mgr.a (mgr.14150) 45 : cluster [DBG] pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:35.384 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:35 vm04 
bash[22793]: debug 2026-03-09T20:19:35.226+0000 7f90ac91c640 1 mon.b@-1(synchronizing).paxosservice(auth 1..3) refresh upgraded, format 0 -> 3 2026-03-09T20:19:38.687 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:19:38.687 INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":2,"fsid":"f72c9476-1bf4-11f1-9f3a-7162c3a72a6d","modified":"2026-03-09T20:19:33.674690Z","created":"2026-03-09T20:18:30.276494Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1]} 2026-03-09T20:19:38.687 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 2 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cephadm 2026-03-09T20:19:33.495086+0000 mgr.a (mgr.14150) 46 : cephadm [INF] Deploying daemon mon.b on vm04 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cephadm 2026-03-09T20:19:33.495086+0000 mgr.a (mgr.14150) 46 : cephadm [INF] Deploying daemon mon.b on vm04 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:33.677237+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:33.677237+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:33.677330+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:33.677330+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:33.677592+0000 mon.a (mon.0) 161 : cluster [INF] mon.a calling monitor election 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:33.677592+0000 mon.a (mon.0) 161 : cluster [INF] mon.a calling monitor election 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:33.725611+0000 mon.a (mon.0) 162 : audit [DBG] from='client.? 
192.168.123.108:0/1934071745' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:33.725611+0000 mon.a (mon.0) 162 : audit [DBG] from='client.? 192.168.123.108:0/1934071745' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:34.673643+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:34.673643+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:35.207576+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:35.207576+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:35.235328+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:35.235328+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:35.673829+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:35.673829+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:35.678100+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:35.678100+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:36.234962+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:36.234962+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:39.056 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:36.673765+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:36.673765+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:37.207741+0000 mgr.a (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:37.207741+0000 mgr.a (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:37.235168+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:37.235168+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:37.673967+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:37.673967+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.235368+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.235368+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:39.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.674020+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.674020+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.682079+0000 mon.a (mon.0) 172 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 
20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.682079+0000 mon.a (mon.0) 172 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686443+0000 mon.a (mon.0) 173 : cluster [DBG] monmap epoch 2 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686443+0000 mon.a (mon.0) 173 : cluster [DBG] monmap epoch 2 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686460+0000 mon.a (mon.0) 174 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686460+0000 mon.a (mon.0) 174 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686470+0000 mon.a (mon.0) 175 : cluster [DBG] last_changed 2026-03-09T20:19:33.674690+0000 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686470+0000 mon.a (mon.0) 175 : cluster [DBG] last_changed 2026-03-09T20:19:33.674690+0000 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686480+0000 mon.a (mon.0) 176 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686480+0000 mon.a (mon.0) 176 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686491+0000 mon.a (mon.0) 177 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686491+0000 mon.a (mon.0) 177 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686501+0000 mon.a (mon.0) 178 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686501+0000 mon.a (mon.0) 178 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686511+0000 mon.a (mon.0) 179 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686511+0000 mon.a (mon.0) 179 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686521+0000 mon.a (mon.0) 180 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686521+0000 mon.a (mon.0) 180 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:19:39.057 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686765+0000 mon.a (mon.0) 181 : cluster [DBG] fsmap 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686765+0000 mon.a (mon.0) 181 : cluster [DBG] fsmap 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686785+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686785+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686907+0000 mon.a (mon.0) 183 : cluster [DBG] mgrmap e12: a(active, since 47s) 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.686907+0000 mon.a (mon.0) 183 : cluster [DBG] mgrmap e12: a(active, since 47s) 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.687015+0000 mon.a (mon.0) 184 : cluster [INF] overall HEALTH_OK 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: cluster 2026-03-09T20:19:38.687015+0000 mon.a (mon.0) 184 : cluster [INF] overall HEALTH_OK 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.691560+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.691560+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.696240+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.696240+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.702801+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.702801+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.708008+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.708008+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.721865+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:39.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:38 vm08 bash[23232]: audit 2026-03-09T20:19:38.721865+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
2026-03-09T20:19:39.758 INFO:tasks.cephadm:Waiting for 3 mons in monmap... 2026-03-09T20:19:39.758 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph mon dump -f json 2026-03-09T20:19:40.156 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:19:39 vm03 bash[20968]: debug 2026-03-09T20:19:39.671+0000 7f4bc7961640 -1 mgr.server handle_report got status from non-daemon mon.c 2026-03-09T20:19:43.491 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.c/config 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cephadm 2026-03-09T20:19:33.495086+0000 mgr.a (mgr.14150) 46 : cephadm [INF] Deploying daemon mon.b on vm04 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cephadm 2026-03-09T20:19:33.495086+0000 mgr.a (mgr.14150) 46 : cephadm [INF] Deploying daemon mon.b on vm04 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:33.677237+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:33.677237+0000 mon.a (mon.0) 159 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:33.677330+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:33.677330+0000 mon.a (mon.0) 160 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:33.677592+0000 mon.a (mon.0) 161 : cluster [INF] mon.a calling monitor election 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:33.677592+0000 mon.a (mon.0) 161 : cluster [INF] mon.a calling monitor election 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:33.725611+0000 mon.a (mon.0) 162 : audit [DBG] from='client.? 192.168.123.108:0/1934071745' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:33.725611+0000 mon.a (mon.0) 162 : audit [DBG] from='client.?
192.168.123.108:0/1934071745' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:34.673643+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:34.673643+0000 mon.a (mon.0) 163 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:35.207576+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:35.207576+0000 mgr.a (mgr.14150) 47 : cluster [DBG] pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:35.235328+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:35.235328+0000 mon.a (mon.0) 164 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:35.673829+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:35.673829+0000 mon.a (mon.0) 165 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:35.678100+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:35.678100+0000 mon.c (mon.1) 1 : cluster [INF] mon.c calling monitor election 2026-03-09T20:19:44.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:36.234962+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:36.234962+0000 mon.a (mon.0) 166 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:36.673765+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.621 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:36.673765+0000 mon.a (mon.0) 167 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:37.207741+0000 mgr.a (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:37.207741+0000 mgr.a (mgr.14150) 48 : cluster [DBG] pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:37.235168+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:37.235168+0000 mon.a (mon.0) 168 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:37.673967+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:37.673967+0000 mon.a (mon.0) 169 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.235368+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.235368+0000 mon.a (mon.0) 170 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.674020+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.674020+0000 mon.a (mon.0) 171 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.682079+0000 mon.a (mon.0) 172 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.682079+0000 mon.a (mon.0) 172 : cluster [INF] mon.a is new leader, mons a,c in quorum (ranks 0,1) 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 
2026-03-09T20:19:38.686443+0000 mon.a (mon.0) 173 : cluster [DBG] monmap epoch 2 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686443+0000 mon.a (mon.0) 173 : cluster [DBG] monmap epoch 2 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686460+0000 mon.a (mon.0) 174 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686460+0000 mon.a (mon.0) 174 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686470+0000 mon.a (mon.0) 175 : cluster [DBG] last_changed 2026-03-09T20:19:33.674690+0000 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686470+0000 mon.a (mon.0) 175 : cluster [DBG] last_changed 2026-03-09T20:19:33.674690+0000 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686480+0000 mon.a (mon.0) 176 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686480+0000 mon.a (mon.0) 176 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686491+0000 mon.a (mon.0) 177 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686491+0000 mon.a (mon.0) 177 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686501+0000 mon.a (mon.0) 178 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686501+0000 mon.a (mon.0) 178 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686511+0000 mon.a (mon.0) 179 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686511+0000 mon.a (mon.0) 179 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686521+0000 mon.a (mon.0) 180 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686521+0000 mon.a (mon.0) 180 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686765+0000 mon.a (mon.0) 181 : cluster [DBG] fsmap 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 
2026-03-09T20:19:38.686765+0000 mon.a (mon.0) 181 : cluster [DBG] fsmap 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686785+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686785+0000 mon.a (mon.0) 182 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686907+0000 mon.a (mon.0) 183 : cluster [DBG] mgrmap e12: a(active, since 47s) 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.686907+0000 mon.a (mon.0) 183 : cluster [DBG] mgrmap e12: a(active, since 47s) 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.687015+0000 mon.a (mon.0) 184 : cluster [INF] overall HEALTH_OK 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:38.687015+0000 mon.a (mon.0) 184 : cluster [INF] overall HEALTH_OK 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.691560+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.691560+0000 mon.a (mon.0) 185 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.696240+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.696240+0000 mon.a (mon.0) 186 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.702801+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.702801+0000 mon.a (mon.0) 187 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.708008+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.708008+0000 mon.a (mon.0) 188 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.721865+0000 mon.a (mon.0) 189 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:38.721865+0000 mon.a (mon.0) 189 : audit [DBG] 
from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:39.207915+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:39.207915+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:39.322574+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:39.322574+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:39.322749+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:39.322749+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:39.322863+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:39.322863+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:39.322991+0000 mon.a (mon.0) 194 : cluster [INF] mon.a calling monitor election 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:39.322991+0000 mon.a (mon.0) 194 : cluster [INF] mon.a calling monitor election 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:39.349596+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T20:19:44.621 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:39.349596+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:40.235640+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:40.235640+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:41.208101+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:41.208101+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:41.235632+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:41.235632+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:41.236390+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:41.236390+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:42.235865+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:42.235865+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:43.208305+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:43.208305+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:43.236114+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:43.236114+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.236297+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.236297+0000 mon.a (mon.0) 199 : audit 
[DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.348232+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.348232+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351839+0000 mon.a (mon.0) 201 : cluster [DBG] monmap epoch 3 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351839+0000 mon.a (mon.0) 201 : cluster [DBG] monmap epoch 3 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351857+0000 mon.a (mon.0) 202 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351857+0000 mon.a (mon.0) 202 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351867+0000 mon.a (mon.0) 203 : cluster [DBG] last_changed 2026-03-09T20:19:39.236940+0000 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351867+0000 mon.a (mon.0) 203 : cluster [DBG] last_changed 2026-03-09T20:19:39.236940+0000 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351877+0000 mon.a (mon.0) 204 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351877+0000 mon.a (mon.0) 204 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351886+0000 mon.a (mon.0) 205 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351886+0000 mon.a (mon.0) 205 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351895+0000 mon.a (mon.0) 206 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351895+0000 mon.a (mon.0) 206 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351904+0000 mon.a (mon.0) 207 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351904+0000 mon.a (mon.0) 207 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 
vm04 bash[22793]: cluster 2026-03-09T20:19:44.351917+0000 mon.a (mon.0) 208 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351917+0000 mon.a (mon.0) 208 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351931+0000 mon.a (mon.0) 209 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.351931+0000 mon.a (mon.0) 209 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.352238+0000 mon.a (mon.0) 210 : cluster [DBG] fsmap 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.352238+0000 mon.a (mon.0) 210 : cluster [DBG] fsmap 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.352263+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.352263+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.352395+0000 mon.a (mon.0) 212 : cluster [DBG] mgrmap e12: a(active, since 53s) 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.352395+0000 mon.a (mon.0) 212 : cluster [DBG] mgrmap e12: a(active, since 53s) 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.352474+0000 mon.a (mon.0) 213 : cluster [INF] overall HEALTH_OK 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: cluster 2026-03-09T20:19:44.352474+0000 mon.a (mon.0) 213 : cluster [INF] overall HEALTH_OK 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.363575+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.363575+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.366988+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.366988+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.371231+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.371231+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.374976+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.374976+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.622 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.378763+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.623 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.378763+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.623 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.379279+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:44.623 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.379279+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:44.623 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.379828+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:44.623 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:44 vm04 bash[22793]: audit 2026-03-09T20:19:44.379828+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:39.207915+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:39.207915+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:39.322574+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:39.322574+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:39.322749+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:39.322749+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:39.322863+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:39.322863+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:39.322991+0000 mon.a (mon.0) 194 : cluster [INF] mon.a calling monitor election 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:39.322991+0000 mon.a (mon.0) 194 : cluster [INF] mon.a calling monitor election 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:39.349596+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:39.349596+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:40.235640+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:40.235640+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:41.208101+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:41.208101+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:41.235632+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:41.235632+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:41.236390+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T20:19:44.806 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:41.236390+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:42.235865+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:42.235865+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:43.208305+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:43.208305+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:43.236114+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:43.236114+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.236297+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.236297+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.348232+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.348232+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351839+0000 mon.a (mon.0) 201 : cluster [DBG] monmap epoch 3 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351839+0000 mon.a (mon.0) 201 : cluster [DBG] monmap epoch 3 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351857+0000 mon.a (mon.0) 202 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351857+0000 mon.a (mon.0) 202 : cluster [DBG] fsid 
f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351867+0000 mon.a (mon.0) 203 : cluster [DBG] last_changed 2026-03-09T20:19:39.236940+0000 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351867+0000 mon.a (mon.0) 203 : cluster [DBG] last_changed 2026-03-09T20:19:39.236940+0000 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351877+0000 mon.a (mon.0) 204 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351877+0000 mon.a (mon.0) 204 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351886+0000 mon.a (mon.0) 205 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351886+0000 mon.a (mon.0) 205 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351895+0000 mon.a (mon.0) 206 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351895+0000 mon.a (mon.0) 206 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351904+0000 mon.a (mon.0) 207 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351904+0000 mon.a (mon.0) 207 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351917+0000 mon.a (mon.0) 208 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351917+0000 mon.a (mon.0) 208 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351931+0000 mon.a (mon.0) 209 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.351931+0000 mon.a (mon.0) 209 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.352238+0000 mon.a (mon.0) 210 : cluster [DBG] fsmap 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.352238+0000 mon.a (mon.0) 210 : cluster [DBG] fsmap 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 
2026-03-09T20:19:44.352263+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.352263+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.352395+0000 mon.a (mon.0) 212 : cluster [DBG] mgrmap e12: a(active, since 53s) 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.352395+0000 mon.a (mon.0) 212 : cluster [DBG] mgrmap e12: a(active, since 53s) 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.352474+0000 mon.a (mon.0) 213 : cluster [INF] overall HEALTH_OK 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: cluster 2026-03-09T20:19:44.352474+0000 mon.a (mon.0) 213 : cluster [INF] overall HEALTH_OK 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.363575+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.363575+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.366988+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.366988+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.371231+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.371231+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.374976+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.374976+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.378763+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.378763+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.379279+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.379279+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.379828+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:44.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:44 vm08 bash[23232]: audit 2026-03-09T20:19:44.379828+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:39.207915+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:39.207915+0000 mgr.a (mgr.14150) 49 : cluster [DBG] pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:39.322574+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:39.322574+0000 mon.a (mon.0) 191 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:39.322749+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:39.322749+0000 mon.a (mon.0) 192 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:39.322863+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:39.322863+0000 mon.a (mon.0) 193 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:39.322991+0000 mon.a (mon.0) 194 : cluster [INF] mon.a calling monitor election 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:39.322991+0000 mon.a (mon.0) 194 : cluster [INF] mon.a calling monitor election 
2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:39.349596+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:39.349596+0000 mon.c (mon.1) 2 : cluster [INF] mon.c calling monitor election 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:40.235640+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:40.235640+0000 mon.a (mon.0) 195 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:41.208101+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:41.208101+0000 mgr.a (mgr.14150) 50 : cluster [DBG] pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:41.235632+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:41.235632+0000 mon.a (mon.0) 196 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:41.236390+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:41.236390+0000 mon.b (mon.2) 1 : cluster [INF] mon.b calling monitor election 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:42.235865+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:42.235865+0000 mon.a (mon.0) 197 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:43.208305+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:43.208305+0000 mgr.a (mgr.14150) 51 : cluster [DBG] pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:43.236114+0000 mon.a 
(mon.0) 198 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:43.236114+0000 mon.a (mon.0) 198 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.236297+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.236297+0000 mon.a (mon.0) 199 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.348232+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.348232+0000 mon.a (mon.0) 200 : cluster [INF] mon.a is new leader, mons a,c,b in quorum (ranks 0,1,2) 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351839+0000 mon.a (mon.0) 201 : cluster [DBG] monmap epoch 3 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351839+0000 mon.a (mon.0) 201 : cluster [DBG] monmap epoch 3 2026-03-09T20:19:44.842 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351857+0000 mon.a (mon.0) 202 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351857+0000 mon.a (mon.0) 202 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351867+0000 mon.a (mon.0) 203 : cluster [DBG] last_changed 2026-03-09T20:19:39.236940+0000 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351867+0000 mon.a (mon.0) 203 : cluster [DBG] last_changed 2026-03-09T20:19:39.236940+0000 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351877+0000 mon.a (mon.0) 204 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351877+0000 mon.a (mon.0) 204 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351886+0000 mon.a (mon.0) 205 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351886+0000 mon.a (mon.0) 205 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:19:44.843 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351895+0000 mon.a (mon.0) 206 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351895+0000 mon.a (mon.0) 206 : cluster [DBG] election_strategy: 1 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351904+0000 mon.a (mon.0) 207 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351904+0000 mon.a (mon.0) 207 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351917+0000 mon.a (mon.0) 208 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351917+0000 mon.a (mon.0) 208 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351931+0000 mon.a (mon.0) 209 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.351931+0000 mon.a (mon.0) 209 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.352238+0000 mon.a (mon.0) 210 : cluster [DBG] fsmap 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.352238+0000 mon.a (mon.0) 210 : cluster [DBG] fsmap 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.352263+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.352263+0000 mon.a (mon.0) 211 : cluster [DBG] osdmap e4: 0 total, 0 up, 0 in 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.352395+0000 mon.a (mon.0) 212 : cluster [DBG] mgrmap e12: a(active, since 53s) 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.352395+0000 mon.a (mon.0) 212 : cluster [DBG] mgrmap e12: a(active, since 53s) 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.352474+0000 mon.a (mon.0) 213 : cluster [INF] overall HEALTH_OK 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: cluster 2026-03-09T20:19:44.352474+0000 mon.a (mon.0) 213 : cluster [INF] overall HEALTH_OK 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.363575+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.363575+0000 mon.a (mon.0) 214 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.366988+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.366988+0000 mon.a (mon.0) 215 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.371231+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.371231+0000 mon.a (mon.0) 216 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.374976+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.374976+0000 mon.a (mon.0) 217 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.378763+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.378763+0000 mon.a (mon.0) 218 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.379279+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.379279+0000 mon.a (mon.0) 219 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.379828+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:44.843 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:44 vm03 bash[20708]: audit 2026-03-09T20:19:44.379828+0000 mon.a (mon.0) 220 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:45.215 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:19:45.215 
INFO:teuthology.orchestra.run.vm08.stdout:{"epoch":3,"fsid":"f72c9476-1bf4-11f1-9f3a-7162c3a72a6d","modified":"2026-03-09T20:19:39.236940Z","created":"2026-03-09T20:18:30.276494Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:3300","nonce":0},{"type":"v1","addr":"192.168.123.103:6789","nonce":0}]},"addr":"192.168.123.103:6789/0","public_addr":"192.168.123.103:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:3300","nonce":0},{"type":"v1","addr":"192.168.123.108:6789","nonce":0}]},"addr":"192.168.123.108:6789/0","public_addr":"192.168.123.108:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"b","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:3300","nonce":0},{"type":"v1","addr":"192.168.123.104:6789","nonce":0}]},"addr":"192.168.123.104:6789/0","public_addr":"192.168.123.104:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]} 2026-03-09T20:19:45.215 INFO:teuthology.orchestra.run.vm08.stderr:dumped monmap epoch 3 2026-03-09T20:19:45.295 INFO:tasks.cephadm:Generating final ceph.conf file... 2026-03-09T20:19:45.295 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph config generate-minimal-conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.380417+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.380417+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.380531+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.380531+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.380583+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.380583+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.443073+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.443073+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating 
vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.452535+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.452535+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.452688+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.452688+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.514957+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.514957+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.519242+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.519242+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.523942+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.523942+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.528684+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.528684+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.533648+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.533648+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.538332+0000 mon.a (mon.0) 226 : audit [INF] 
from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.538332+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.542402+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.542402+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.560065+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.560065+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.563788+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.563788+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.567329+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.567329+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.570760+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.570760+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.571127+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.571127+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 
2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.571363+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.571363+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.571790+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.571790+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.572139+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.572139+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.572749+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.572749+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.960516+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.960516+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.977799+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.977799+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.978611+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.978611+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 
2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.978828+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.978828+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.979275+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.979275+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.979609+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:44.979609+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.980082+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm04 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cephadm 2026-03-09T20:19:44.980082+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm04 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cluster 2026-03-09T20:19:45.208474+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: cluster 2026-03-09T20:19:45.208474+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.215125+0000 mon.a (mon.0) 240 : audit [DBG] from='client.? 192.168.123.108:0/3034977549' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.215125+0000 mon.a (mon.0) 240 : audit [DBG] from='client.? 
192.168.123.108:0/3034977549' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.238571+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.238571+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.375114+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.375114+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.381443+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.381443+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.382413+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.382413+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.382941+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.382941+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.383501+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:45 vm08 bash[23232]: audit 2026-03-09T20:19:45.383501+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.380417+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 
2026-03-09T20:19:45.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.380417+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.380531+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.380531+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.380583+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.380583+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.443073+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.443073+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.452535+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.452535+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.452688+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.452688+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.514957+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.514957+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.519242+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.519242+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 
09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.523942+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.523942+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.528684+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.528684+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.533648+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.533648+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.538332+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.538332+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.542402+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.542402+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.560065+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.560065+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.563788+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.563788+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.567329+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.567329+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.570760+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.570760+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.571127+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.571127+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.571363+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.571363+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.571790+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.571790+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.572139+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.572139+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.572749+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.572749+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.960516+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.960516+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.977799+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.977799+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.978611+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.978611+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T20:19:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.978828+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.978828+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.979275+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.979275+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.979609+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:44.979609+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.980082+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm04 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cephadm 2026-03-09T20:19:44.980082+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm04 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cluster 2026-03-09T20:19:45.208474+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: cluster 2026-03-09T20:19:45.208474+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B 
/ 0 B avail 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.215125+0000 mon.a (mon.0) 240 : audit [DBG] from='client.? 192.168.123.108:0/3034977549' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.215125+0000 mon.a (mon.0) 240 : audit [DBG] from='client.? 192.168.123.108:0/3034977549' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.238571+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.238571+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.375114+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.375114+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.381443+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.381443+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.382413+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.382413+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.382941+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.382941+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.383501+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 
2026-03-09T20:19:45.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:45 vm03 bash[20708]: audit 2026-03-09T20:19:45.383501+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.380417+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.380417+0000 mgr.a (mgr.14150) 52 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.380531+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.380531+0000 mgr.a (mgr.14150) 53 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.380583+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.380583+0000 mgr.a (mgr.14150) 54 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.443073+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.443073+0000 mgr.a (mgr.14150) 55 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.452535+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.452535+0000 mgr.a (mgr.14150) 56 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.452688+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.452688+0000 mgr.a (mgr.14150) 57 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.514957+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.514957+0000 mon.a (mon.0) 221 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.519242+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.519242+0000 mon.a (mon.0) 222 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.523942+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.523942+0000 mon.a (mon.0) 223 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.528684+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.528684+0000 mon.a (mon.0) 224 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.533648+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.533648+0000 mon.a (mon.0) 225 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.538332+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.538332+0000 mon.a (mon.0) 226 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.542402+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.542402+0000 mon.a (mon.0) 227 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.560065+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.560065+0000 mon.a (mon.0) 228 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.563788+0000 mon.a (mon.0) 229 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.563788+0000 mon.a (mon.0) 229 : 
audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.567329+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.567329+0000 mon.a (mon.0) 230 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.570760+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.570760+0000 mon.a (mon.0) 231 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.571127+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.571127+0000 mgr.a (mgr.14150) 58 : cephadm [INF] Reconfiguring mon.a (unknown last config time)... 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.571363+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.571363+0000 mon.a (mon.0) 232 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.571790+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.571790+0000 mon.a (mon.0) 233 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.572139+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.572139+0000 mon.a (mon.0) 234 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.572749+0000 mgr.a (mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.572749+0000 mgr.a 
(mgr.14150) 59 : cephadm [INF] Reconfiguring daemon mon.a on vm03 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.960516+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.960516+0000 mon.a (mon.0) 235 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.977799+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.977799+0000 mon.a (mon.0) 236 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.978611+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.978611+0000 mgr.a (mgr.14150) 60 : cephadm [INF] Reconfiguring mon.b (monmap changed)... 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.978828+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.978828+0000 mon.a (mon.0) 237 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.979275+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.979275+0000 mon.a (mon.0) 238 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.979609+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:44.979609+0000 mon.a (mon.0) 239 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.980082+0000 mgr.a (mgr.14150) 61 : cephadm [INF] Reconfiguring daemon mon.b on vm04 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cephadm 2026-03-09T20:19:44.980082+0000 mgr.a (mgr.14150) 61 : cephadm [INF] 
Reconfiguring daemon mon.b on vm04 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cluster 2026-03-09T20:19:45.208474+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: cluster 2026-03-09T20:19:45.208474+0000 mgr.a (mgr.14150) 62 : cluster [DBG] pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.215125+0000 mon.a (mon.0) 240 : audit [DBG] from='client.? 192.168.123.108:0/3034977549' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.215125+0000 mon.a (mon.0) 240 : audit [DBG] from='client.? 192.168.123.108:0/3034977549' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.238571+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.238571+0000 mon.a (mon.0) 241 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.375114+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.375114+0000 mon.a (mon.0) 242 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.381443+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.381443+0000 mon.a (mon.0) 243 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.382413+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.382413+0000 mon.a (mon.0) 244 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.382941+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 
2026-03-09T20:19:45.382941+0000 mon.a (mon.0) 245 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.383501+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:45.871 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:45 vm04 bash[22793]: audit 2026-03-09T20:19:45.383501+0000 mon.a (mon.0) 246 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:46.656 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20968]: debug 2026-03-09T20:19:46.231+0000 7f4bc7961640 -1 mgr.server handle_report got status from non-daemon mon.b 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: cephadm 2026-03-09T20:19:45.382206+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: cephadm 2026-03-09T20:19:45.382206+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: cephadm 2026-03-09T20:19:45.384052+0000 mgr.a (mgr.14150) 64 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: cephadm 2026-03-09T20:19:45.384052+0000 mgr.a (mgr.14150) 64 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.773559+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.773559+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.778332+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.778332+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.779310+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.779310+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.780397+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.780397+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.780857+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.780857+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.786882+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:46 vm08 bash[23232]: audit 2026-03-09T20:19:45.786882+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: cephadm 2026-03-09T20:19:45.382206+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T20:19:47.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: cephadm 2026-03-09T20:19:45.382206+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 
2026-03-09T20:19:47.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: cephadm 2026-03-09T20:19:45.384052+0000 mgr.a (mgr.14150) 64 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-09T20:19:47.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: cephadm 2026-03-09T20:19:45.384052+0000 mgr.a (mgr.14150) 64 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-09T20:19:47.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.773559+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.773559+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.778332+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.778332+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.779310+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:47.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.779310+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:47.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.780397+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:47.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.780397+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:47.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.780857+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:47.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.780857+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:47.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.786882+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:46 vm04 bash[22793]: audit 2026-03-09T20:19:45.786882+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.157 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: cephadm 2026-03-09T20:19:45.382206+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: cephadm 2026-03-09T20:19:45.382206+0000 mgr.a (mgr.14150) 63 : cephadm [INF] Reconfiguring mon.c (monmap changed)... 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: cephadm 2026-03-09T20:19:45.384052+0000 mgr.a (mgr.14150) 64 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: cephadm 2026-03-09T20:19:45.384052+0000 mgr.a (mgr.14150) 64 : cephadm [INF] Reconfiguring daemon mon.c on vm08 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.773559+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.773559+0000 mon.a (mon.0) 247 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.778332+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.778332+0000 mon.a (mon.0) 248 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.779310+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.779310+0000 mon.a (mon.0) 249 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.780397+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.780397+0000 mon.a (mon.0) 250 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.780857+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.780857+0000 mon.a (mon.0) 251 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 
bash[20708]: audit 2026-03-09T20:19:45.786882+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:47.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:46 vm03 bash[20708]: audit 2026-03-09T20:19:45.786882+0000 mon.a (mon.0) 252 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:48.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:47 vm08 bash[23232]: cluster 2026-03-09T20:19:47.208668+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:48.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:47 vm08 bash[23232]: cluster 2026-03-09T20:19:47.208668+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:48.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:47 vm04 bash[22793]: cluster 2026-03-09T20:19:47.208668+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:48.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:47 vm04 bash[22793]: cluster 2026-03-09T20:19:47.208668+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:48.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:47 vm03 bash[20708]: cluster 2026-03-09T20:19:47.208668+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:48.156 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:47 vm03 bash[20708]: cluster 2026-03-09T20:19:47.208668+0000 mgr.a (mgr.14150) 65 : cluster [DBG] pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:49.919 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:19:50.181 INFO:teuthology.orchestra.run.vm03.stdout:# minimal ceph.conf for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:50.181 INFO:teuthology.orchestra.run.vm03.stdout:[global] 2026-03-09T20:19:50.181 INFO:teuthology.orchestra.run.vm03.stdout: fsid = f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:19:50.181 INFO:teuthology.orchestra.run.vm03.stdout: mon_host = [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] 2026-03-09T20:19:50.242 INFO:tasks.cephadm:Distributing (final) config and client.admin keyring... 
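The minimal ceph.conf printed above is the output of `ceph config generate-minimal-conf`, which the cephadm task then pushes to /etc/ceph on every node (the dd commands that follow). A hedged sketch, assuming shell access to a host that already holds the admin keyring, of how the same file and keyring could be regenerated by hand:

    # sketch only -- mirrors what the cephadm task does below, not the task's own code
    sudo cephadm shell -- ceph config generate-minimal-conf | sudo tee /etc/ceph/ceph.conf
    sudo cephadm shell -- ceph auth get client.admin | sudo tee /etc/ceph/ceph.client.admin.keyring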
2026-03-09T20:19:50.242 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:19:50.242 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T20:19:50.249 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:19:50.249 DEBUG:teuthology.orchestra.run.vm03:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:50.300 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T20:19:50.300 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T20:19:50.307 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T20:19:50.307 DEBUG:teuthology.orchestra.run.vm04:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:50.356 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T20:19:50.356 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.conf 2026-03-09T20:19:50.363 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T20:19:50.363 DEBUG:teuthology.orchestra.run.vm08:> sudo dd of=/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:19:50.412 INFO:tasks.cephadm:Adding mgr.a on vm03 2026-03-09T20:19:50.412 INFO:tasks.cephadm:Adding mgr.b on vm04 2026-03-09T20:19:50.412 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph orch apply mgr '2;vm03=a;vm04=b' 2026-03-09T20:19:50.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:50 vm08 bash[23232]: cluster 2026-03-09T20:19:49.208847+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:50.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:50 vm08 bash[23232]: cluster 2026-03-09T20:19:49.208847+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:50.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:50 vm08 bash[23232]: audit 2026-03-09T20:19:50.181208+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 192.168.123.103:0/2993739971' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:50.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:50 vm08 bash[23232]: audit 2026-03-09T20:19:50.181208+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 192.168.123.103:0/2993739971' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:50.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:50 vm04 bash[22793]: cluster 2026-03-09T20:19:49.208847+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:50.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:50 vm04 bash[22793]: cluster 2026-03-09T20:19:49.208847+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:50.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:50 vm04 bash[22793]: audit 2026-03-09T20:19:50.181208+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 192.168.123.103:0/2993739971' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:50.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:50 vm04 bash[22793]: audit 2026-03-09T20:19:50.181208+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 
192.168.123.103:0/2993739971' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:50.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:50 vm03 bash[20708]: cluster 2026-03-09T20:19:49.208847+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:50.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:50 vm03 bash[20708]: cluster 2026-03-09T20:19:49.208847+0000 mgr.a (mgr.14150) 66 : cluster [DBG] pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:50.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:50 vm03 bash[20708]: audit 2026-03-09T20:19:50.181208+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 192.168.123.103:0/2993739971' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:50.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:50 vm03 bash[20708]: audit 2026-03-09T20:19:50.181208+0000 mon.a (mon.0) 253 : audit [DBG] from='client.? 192.168.123.103:0/2993739971' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:52.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:52 vm08 bash[23232]: cluster 2026-03-09T20:19:51.209004+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:52.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:52 vm08 bash[23232]: cluster 2026-03-09T20:19:51.209004+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:52.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:52 vm04 bash[22793]: cluster 2026-03-09T20:19:51.209004+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:52.653 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:52 vm04 bash[22793]: cluster 2026-03-09T20:19:51.209004+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:52.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:52 vm03 bash[20708]: cluster 2026-03-09T20:19:51.209004+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:52.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:52 vm03 bash[20708]: cluster 2026-03-09T20:19:51.209004+0000 mgr.a (mgr.14150) 67 : cluster [DBG] pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:54.060 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.c/config 2026-03-09T20:19:54.353 INFO:teuthology.orchestra.run.vm08.stdout:Scheduled mgr update... 2026-03-09T20:19:54.391 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:54 vm08 bash[23232]: cluster 2026-03-09T20:19:53.209168+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:54.391 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:54 vm08 bash[23232]: cluster 2026-03-09T20:19:53.209168+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:54.420 DEBUG:teuthology.orchestra.run.vm04:mgr.b> sudo journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mgr.b.service 2026-03-09T20:19:54.421 INFO:tasks.cephadm:Deploying OSDs... 
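"Deploying OSDs" begins with a scratch-device scan on each node, visible in the lines that follow: the task first tries to read /scratch_devs, falls back to `ls /dev/[sv]d?`, drops the root device (/dev/vda here), and then sanity-checks every remaining candidate. A rough standalone sketch of that per-device check (the device list is just this run's example, not a fixed set):

    # hedged sketch of the checks logged below; not the teuthology implementation itself
    for dev in /dev/vdb /dev/vdc /dev/vdd /dev/vde; do
        stat "$dev"                                    # node exists and is a block special file
        sudo dd if="$dev" of=/dev/null count=1         # first 512-byte sector is readable
        ! mount | grep -v devtmpfs | grep -q "$dev"    # device is not mounted anywhere
    done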
2026-03-09T20:19:54.421 DEBUG:teuthology.orchestra.run.vm03:> set -ex 2026-03-09T20:19:54.421 DEBUG:teuthology.orchestra.run.vm03:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T20:19:54.424 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:19:54.424 DEBUG:teuthology.orchestra.run.vm03:> ls /dev/[sv]d? 2026-03-09T20:19:54.468 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vda 2026-03-09T20:19:54.468 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdb 2026-03-09T20:19:54.468 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdc 2026-03-09T20:19:54.468 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vdd 2026-03-09T20:19:54.468 INFO:teuthology.orchestra.run.vm03.stdout:/dev/vde 2026-03-09T20:19:54.468 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T20:19:54.468 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T20:19:54.469 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdb 2026-03-09T20:19:54.512 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdb 2026-03-09T20:19:54.512 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:54.512 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T20:19:54.512 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:54.512 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 20:13:10.838204468 +0000 2026-03-09T20:19:54.512 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 20:13:09.782204468 +0000 2026-03-09T20:19:54.512 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 20:13:09.782204468 +0000 2026-03-09T20:19:54.512 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-09T20:19:54.512 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T20:19:54.558 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:54 vm03 bash[20708]: cluster 2026-03-09T20:19:53.209168+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:54.559 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:54 vm03 bash[20708]: cluster 2026-03-09T20:19:53.209168+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:54.560 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T20:19:54.560 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T20:19:54.560 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000190766 s, 2.7 MB/s 2026-03-09T20:19:54.561 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T20:19:54.609 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdc 2026-03-09T20:19:54.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:54 vm04 bash[22793]: cluster 2026-03-09T20:19:53.209168+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:54.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:54 vm04 bash[22793]: cluster 2026-03-09T20:19:53.209168+0000 mgr.a (mgr.14150) 68 : cluster [DBG] pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:54.656 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdc 2026-03-09T20:19:54.656 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:54.656 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T20:19:54.656 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:54.656 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 20:13:10.850204468 +0000 2026-03-09T20:19:54.656 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 20:13:09.782204468 +0000 2026-03-09T20:19:54.656 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 20:13:09.782204468 +0000 2026-03-09T20:19:54.656 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-09T20:19:54.656 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T20:19:54.703 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T20:19:54.703 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T20:19:54.703 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000171201 s, 3.0 MB/s 2026-03-09T20:19:54.704 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T20:19:54.748 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vdd 2026-03-09T20:19:54.791 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vdd 2026-03-09T20:19:54.791 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:54.791 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T20:19:54.791 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:54.791 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 20:13:10.838204468 +0000 2026-03-09T20:19:54.792 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 20:13:09.778204468 +0000 2026-03-09T20:19:54.792 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 20:13:09.778204468 +0000 2026-03-09T20:19:54.792 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-09T20:19:54.792 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T20:19:54.839 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T20:19:54.839 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T20:19:54.839 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000158646 s, 3.2 MB/s 2026-03-09T20:19:54.840 DEBUG:teuthology.orchestra.run.vm03:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T20:19:54.885 DEBUG:teuthology.orchestra.run.vm03:> stat /dev/vde 2026-03-09T20:19:54.932 INFO:teuthology.orchestra.run.vm03.stdout: File: /dev/vde 2026-03-09T20:19:54.932 INFO:teuthology.orchestra.run.vm03.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:54.932 INFO:teuthology.orchestra.run.vm03.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T20:19:54.932 INFO:teuthology.orchestra.run.vm03.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:54.932 INFO:teuthology.orchestra.run.vm03.stdout:Access: 2026-03-09 20:13:10.846204468 +0000 2026-03-09T20:19:54.932 INFO:teuthology.orchestra.run.vm03.stdout:Modify: 2026-03-09 20:13:09.830204468 +0000 2026-03-09T20:19:54.932 INFO:teuthology.orchestra.run.vm03.stdout:Change: 2026-03-09 20:13:09.830204468 +0000 2026-03-09T20:19:54.932 INFO:teuthology.orchestra.run.vm03.stdout: Birth: - 2026-03-09T20:19:54.932 DEBUG:teuthology.orchestra.run.vm03:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T20:19:54.980 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records in 2026-03-09T20:19:54.980 INFO:teuthology.orchestra.run.vm03.stderr:1+0 records out 2026-03-09T20:19:54.980 INFO:teuthology.orchestra.run.vm03.stderr:512 bytes copied, 0.000171632 s, 3.0 MB/s 2026-03-09T20:19:54.981 DEBUG:teuthology.orchestra.run.vm03:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T20:19:55.029 DEBUG:teuthology.orchestra.run.vm04:> set -ex 2026-03-09T20:19:55.029 DEBUG:teuthology.orchestra.run.vm04:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T20:19:55.032 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:19:55.032 DEBUG:teuthology.orchestra.run.vm04:> ls /dev/[sv]d? 
2026-03-09T20:19:55.077 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vda 2026-03-09T20:19:55.077 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdb 2026-03-09T20:19:55.077 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdc 2026-03-09T20:19:55.077 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vdd 2026-03-09T20:19:55.077 INFO:teuthology.orchestra.run.vm04.stdout:/dev/vde 2026-03-09T20:19:55.077 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T20:19:55.077 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T20:19:55.077 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdb 2026-03-09T20:19:55.122 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdb 2026-03-09T20:19:55.122 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:55.122 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T20:19:55.122 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:55.122 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 20:12:39.733237319 +0000 2026-03-09T20:19:55.122 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 20:12:38.649237319 +0000 2026-03-09T20:19:55.122 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 20:12:38.649237319 +0000 2026-03-09T20:19:55.122 INFO:teuthology.orchestra.run.vm04.stdout: Birth: - 2026-03-09T20:19:55.122 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T20:19:55.173 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T20:19:55.173 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T20:19:55.173 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:54 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:19:55.173 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:19:55.173 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:55 vm04 systemd[1]: Started Ceph mgr.b for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d. 2026-03-09T20:19:55.173 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000131576 s, 3.9 MB/s 2026-03-09T20:19:55.174 DEBUG:teuthology.orchestra.run.vm04:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T20:19:55.220 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdc 2026-03-09T20:19:55.268 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdc 2026-03-09T20:19:55.268 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:55.268 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T20:19:55.268 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:55.268 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 20:12:39.745237319 +0000 2026-03-09T20:19:55.268 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 20:12:38.637237319 +0000 2026-03-09T20:19:55.268 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 20:12:38.637237319 +0000 2026-03-09T20:19:55.268 INFO:teuthology.orchestra.run.vm04.stdout: Birth: - 2026-03-09T20:19:55.270 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T20:19:55.334 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T20:19:55.334 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T20:19:55.334 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000178485 s, 2.9 MB/s 2026-03-09T20:19:55.335 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T20:19:55.395 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vdd 2026-03-09T20:19:55.444 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vdd 2026-03-09T20:19:55.444 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:55.444 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T20:19:55.444 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:55.444 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 20:12:39.733237319 +0000 2026-03-09T20:19:55.444 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 20:12:38.649237319 +0000 2026-03-09T20:19:55.444 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 20:12:38.649237319 +0000 2026-03-09T20:19:55.444 INFO:teuthology.orchestra.run.vm04.stdout: Birth: - 2026-03-09T20:19:55.445 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T20:19:55.494 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.346919+0000 mgr.a (mgr.14150) 69 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=a;vm04=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:55.494 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.346919+0000 mgr.a (mgr.14150) 69 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=a;vm04=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:55.494 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: cephadm 2026-03-09T20:19:54.348030+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Saving service mgr spec with placement vm03=a;vm04=b;count:2 2026-03-09T20:19:55.494 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: cephadm 2026-03-09T20:19:54.348030+0000 mgr.a (mgr.14150) 70 : 
cephadm [INF] Saving service mgr spec with placement vm03=a;vm04=b;count:2 2026-03-09T20:19:55.494 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.352760+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.494 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.352760+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.494 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.353963+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.494 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.353963+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.355295+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.355295+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.355802+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.355802+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.360376+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.360376+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.361939+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.361939+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 
2026-03-09T20:19:54.364265+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.364265+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.367262+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.367262+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.368033+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:54.368033+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:55.172422+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:55.172422+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:55.176333+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:55.176333+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:55.179802+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:55.179802+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:55.183023+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:55.183023+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' 
entity='mgr.a' 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:55.197943+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.495 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[22793]: audit 2026-03-09T20:19:55.197943+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.496 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[23235]: debug 2026-03-09T20:19:55.366+0000 7f48344d7140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T20:19:55.496 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[23235]: debug 2026-03-09T20:19:55.406+0000 7f48344d7140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:19:55.497 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T20:19:55.497 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T20:19:55.497 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000853778 s, 600 kB/s 2026-03-09T20:19:55.498 DEBUG:teuthology.orchestra.run.vm04:> ! mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T20:19:55.542 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[23235]: debug 2026-03-09T20:19:55.534+0000 7f48344d7140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:19:55.548 DEBUG:teuthology.orchestra.run.vm04:> stat /dev/vde 2026-03-09T20:19:55.593 INFO:teuthology.orchestra.run.vm04.stdout: File: /dev/vde 2026-03-09T20:19:55.594 INFO:teuthology.orchestra.run.vm04.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:55.594 INFO:teuthology.orchestra.run.vm04.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T20:19:55.594 INFO:teuthology.orchestra.run.vm04.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:55.594 INFO:teuthology.orchestra.run.vm04.stdout:Access: 2026-03-09 20:12:39.741237319 +0000 2026-03-09T20:19:55.594 INFO:teuthology.orchestra.run.vm04.stdout:Modify: 2026-03-09 20:12:38.669237319 +0000 2026-03-09T20:19:55.594 INFO:teuthology.orchestra.run.vm04.stdout:Change: 2026-03-09 20:12:38.669237319 +0000 2026-03-09T20:19:55.594 INFO:teuthology.orchestra.run.vm04.stdout: Birth: - 2026-03-09T20:19:55.594 DEBUG:teuthology.orchestra.run.vm04:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T20:19:55.641 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records in 2026-03-09T20:19:55.641 INFO:teuthology.orchestra.run.vm04.stderr:1+0 records out 2026-03-09T20:19:55.641 INFO:teuthology.orchestra.run.vm04.stderr:512 bytes copied, 0.000181249 s, 2.8 MB/s 2026-03-09T20:19:55.642 DEBUG:teuthology.orchestra.run.vm04:> ! 
mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.346919+0000 mgr.a (mgr.14150) 69 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=a;vm04=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.346919+0000 mgr.a (mgr.14150) 69 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=a;vm04=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: cephadm 2026-03-09T20:19:54.348030+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Saving service mgr spec with placement vm03=a;vm04=b;count:2 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: cephadm 2026-03-09T20:19:54.348030+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Saving service mgr spec with placement vm03=a;vm04=b;count:2 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.352760+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.352760+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.353963+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.353963+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.355295+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.355295+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.355802+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.355802+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:55.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.360376+0000 mon.a (mon.0) 258 : audit [INF] 
from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.360376+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.361939+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.361939+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.364265+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.364265+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.367262+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.367262+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.368033+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:54.368033+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:55.172422+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:55.172422+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:55.176333+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
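The audit entries replayed above record the manager scale-out: `ceph orch apply mgr '2;vm03=a;vm04=b'` saves a service spec with placement vm03=a;vm04=b;count:2, after which cephadm creates the mgr.b key. A hedged sketch of the equivalent manual commands, using the names from this run:

    # sketch based on the audit log above; normally cephadm issues these itself
    ceph orch apply mgr '2;vm03=a;vm04=b'
    ceph auth get-or-create mgr.b mon 'profile mgr' osd 'allow *' mds 'allow *'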
2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:55.176333+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:55.179802+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:55.179802+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:55.183023+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:55.183023+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:55.197943+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.658 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:55 vm03 bash[20708]: audit 2026-03-09T20:19:55.197943+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.688 DEBUG:teuthology.orchestra.run.vm08:> set -ex 2026-03-09T20:19:55.688 DEBUG:teuthology.orchestra.run.vm08:> dd if=/scratch_devs of=/dev/stdout 2026-03-09T20:19:55.691 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:19:55.691 DEBUG:teuthology.orchestra.run.vm08:> ls /dev/[sv]d? 
2026-03-09T20:19:55.735 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vda 2026-03-09T20:19:55.736 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdb 2026-03-09T20:19:55.736 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdc 2026-03-09T20:19:55.736 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vdd 2026-03-09T20:19:55.736 INFO:teuthology.orchestra.run.vm08.stdout:/dev/vde 2026-03-09T20:19:55.736 WARNING:teuthology.misc:Removing root device: /dev/vda from device list 2026-03-09T20:19:55.736 DEBUG:teuthology.misc:devs=['/dev/vdb', '/dev/vdc', '/dev/vdd', '/dev/vde'] 2026-03-09T20:19:55.736 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdb 2026-03-09T20:19:55.780 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdb 2026-03-09T20:19:55.780 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:55.780 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 24 Links: 1 Device type: fe,10 2026-03-09T20:19:55.780 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:55.780 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-09 20:12:08.604258007 +0000 2026-03-09T20:19:55.780 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-09 20:12:07.520258007 +0000 2026-03-09T20:19:55.780 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-09 20:12:07.520258007 +0000 2026-03-09T20:19:55.780 INFO:teuthology.orchestra.run.vm08.stdout: Birth: - 2026-03-09T20:19:55.780 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdb of=/dev/null count=1 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.346919+0000 mgr.a (mgr.14150) 69 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=a;vm04=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.346919+0000 mgr.a (mgr.14150) 69 : audit [DBG] from='client.24104 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "placement": "2;vm03=a;vm04=b", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: cephadm 2026-03-09T20:19:54.348030+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Saving service mgr spec with placement vm03=a;vm04=b;count:2 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: cephadm 2026-03-09T20:19:54.348030+0000 mgr.a (mgr.14150) 70 : cephadm [INF] Saving service mgr spec with placement vm03=a;vm04=b;count:2 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.352760+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.352760+0000 mon.a (mon.0) 254 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.353963+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 
09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.353963+0000 mon.a (mon.0) 255 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.355295+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.355295+0000 mon.a (mon.0) 256 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.355802+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.355802+0000 mon.a (mon.0) 257 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.360376+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.360376+0000 mon.a (mon.0) 258 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.361939+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.361939+0000 mon.a (mon.0) 259 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.364265+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.364265+0000 mon.a (mon.0) 260 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.b", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.367262+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr 
services"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.367262+0000 mon.a (mon.0) 261 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.368033+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:54.368033+0000 mon.a (mon.0) 262 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:55.172422+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:55.172422+0000 mon.a (mon.0) 263 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:55.176333+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:55.176333+0000 mon.a (mon.0) 264 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:55.179802+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:55.179802+0000 mon.a (mon.0) 265 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:55.183023+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:55.183023+0000 mon.a (mon.0) 266 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:55.197943+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:55 vm08 bash[23232]: audit 2026-03-09T20:19:55.197943+0000 mon.a (mon.0) 267 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:19:55.812 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in 2026-03-09T20:19:55.812 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out 2026-03-09T20:19:55.812 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000209342 s, 2.4 MB/s 
2026-03-09T20:19:55.813 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdb 2026-03-09T20:19:55.829 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:55 vm04 bash[23235]: debug 2026-03-09T20:19:55.822+0000 7f48344d7140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:19:55.862 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdc 2026-03-09T20:19:55.908 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdc 2026-03-09T20:19:55.908 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:55.908 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 25 Links: 1 Device type: fe,20 2026-03-09T20:19:55.908 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:55.908 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-09 20:12:08.612258007 +0000 2026-03-09T20:19:55.908 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-09 20:12:07.492258007 +0000 2026-03-09T20:19:55.908 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-09 20:12:07.492258007 +0000 2026-03-09T20:19:55.908 INFO:teuthology.orchestra.run.vm08.stdout: Birth: - 2026-03-09T20:19:55.908 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdc of=/dev/null count=1 2026-03-09T20:19:55.955 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in 2026-03-09T20:19:55.955 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out 2026-03-09T20:19:55.955 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000150562 s, 3.4 MB/s 2026-03-09T20:19:55.956 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vdc 2026-03-09T20:19:56.001 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vdd 2026-03-09T20:19:56.044 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vdd 2026-03-09T20:19:56.044 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:56.044 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 26 Links: 1 Device type: fe,30 2026-03-09T20:19:56.044 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:56.044 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-09 20:12:08.600258007 +0000 2026-03-09T20:19:56.044 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-09 20:12:07.520258007 +0000 2026-03-09T20:19:56.044 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-09 20:12:07.520258007 +0000 2026-03-09T20:19:56.044 INFO:teuthology.orchestra.run.vm08.stdout: Birth: - 2026-03-09T20:19:56.045 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vdd of=/dev/null count=1 2026-03-09T20:19:56.092 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in 2026-03-09T20:19:56.092 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out 2026-03-09T20:19:56.092 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000148026 s, 3.5 MB/s 2026-03-09T20:19:56.092 DEBUG:teuthology.orchestra.run.vm08:> ! 
mount | grep -v devtmpfs | grep -q /dev/vdd 2026-03-09T20:19:56.137 DEBUG:teuthology.orchestra.run.vm08:> stat /dev/vde 2026-03-09T20:19:56.179 INFO:teuthology.orchestra.run.vm08.stdout: File: /dev/vde 2026-03-09T20:19:56.179 INFO:teuthology.orchestra.run.vm08.stdout: Size: 0 Blocks: 0 IO Block: 4096 block special file 2026-03-09T20:19:56.179 INFO:teuthology.orchestra.run.vm08.stdout:Device: 5h/5d Inode: 27 Links: 1 Device type: fe,40 2026-03-09T20:19:56.179 INFO:teuthology.orchestra.run.vm08.stdout:Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk) 2026-03-09T20:19:56.179 INFO:teuthology.orchestra.run.vm08.stdout:Access: 2026-03-09 20:12:08.608258007 +0000 2026-03-09T20:19:56.180 INFO:teuthology.orchestra.run.vm08.stdout:Modify: 2026-03-09 20:12:07.492258007 +0000 2026-03-09T20:19:56.180 INFO:teuthology.orchestra.run.vm08.stdout:Change: 2026-03-09 20:12:07.492258007 +0000 2026-03-09T20:19:56.180 INFO:teuthology.orchestra.run.vm08.stdout: Birth: - 2026-03-09T20:19:56.180 DEBUG:teuthology.orchestra.run.vm08:> sudo dd if=/dev/vde of=/dev/null count=1 2026-03-09T20:19:56.228 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records in 2026-03-09T20:19:56.228 INFO:teuthology.orchestra.run.vm08.stderr:1+0 records out 2026-03-09T20:19:56.228 INFO:teuthology.orchestra.run.vm08.stderr:512 bytes copied, 0.000140192 s, 3.7 MB/s 2026-03-09T20:19:56.228 DEBUG:teuthology.orchestra.run.vm08:> ! mount | grep -v devtmpfs | grep -q /dev/vde 2026-03-09T20:19:56.273 INFO:tasks.cephadm:Deploying osd.0 on vm03 with /dev/vde... 2026-03-09T20:19:56.273 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- lvm zap /dev/vde 2026-03-09T20:19:56.617 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[22793]: cephadm 2026-03-09T20:19:54.368750+0000 mgr.a (mgr.14150) 71 : cephadm [INF] Deploying daemon mgr.b on vm04 2026-03-09T20:19:56.617 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[22793]: cephadm 2026-03-09T20:19:54.368750+0000 mgr.a (mgr.14150) 71 : cephadm [INF] Deploying daemon mgr.b on vm04 2026-03-09T20:19:56.617 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[22793]: cluster 2026-03-09T20:19:55.209323+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:56.617 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[22793]: cluster 2026-03-09T20:19:55.209323+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:56.617 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: debug 2026-03-09T20:19:56.274+0000 7f48344d7140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:19:56.617 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: debug 2026-03-09T20:19:56.358+0000 7f48344d7140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T20:19:56.617 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. 
A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T20:19:56.617 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T20:19:56.617 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: from numpy import show_config as show_numpy_config 2026-03-09T20:19:56.617 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: debug 2026-03-09T20:19:56.478+0000 7f48344d7140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:19:56.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:56 vm03 bash[20708]: cephadm 2026-03-09T20:19:54.368750+0000 mgr.a (mgr.14150) 71 : cephadm [INF] Deploying daemon mgr.b on vm04 2026-03-09T20:19:56.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:56 vm03 bash[20708]: cephadm 2026-03-09T20:19:54.368750+0000 mgr.a (mgr.14150) 71 : cephadm [INF] Deploying daemon mgr.b on vm04 2026-03-09T20:19:56.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:56 vm03 bash[20708]: cluster 2026-03-09T20:19:55.209323+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:56.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:56 vm03 bash[20708]: cluster 2026-03-09T20:19:55.209323+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:56.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:56 vm08 bash[23232]: cephadm 2026-03-09T20:19:54.368750+0000 mgr.a (mgr.14150) 71 : cephadm [INF] Deploying daemon mgr.b on vm04 2026-03-09T20:19:56.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:56 vm08 bash[23232]: cephadm 2026-03-09T20:19:54.368750+0000 mgr.a (mgr.14150) 71 : cephadm [INF] Deploying daemon mgr.b on vm04 2026-03-09T20:19:56.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:56 vm08 bash[23232]: cluster 2026-03-09T20:19:55.209323+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:56.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:56 vm08 bash[23232]: cluster 2026-03-09T20:19:55.209323+0000 mgr.a (mgr.14150) 72 : cluster [DBG] pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:56.869 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: debug 2026-03-09T20:19:56.610+0000 7f48344d7140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:19:56.870 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: debug 2026-03-09T20:19:56.646+0000 7f48344d7140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T20:19:56.870 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: debug 2026-03-09T20:19:56.682+0000 7f48344d7140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:19:56.870 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: debug 2026-03-09T20:19:56.722+0000 7f48344d7140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T20:19:56.870 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:56 vm04 bash[23235]: debug 2026-03-09T20:19:56.774+0000 7f48344d7140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:19:57.464 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:57 vm04 
bash[22793]: cluster 2026-03-09T20:19:57.209477+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:57.464 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:57 vm04 bash[22793]: cluster 2026-03-09T20:19:57.209477+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:57.464 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:57 vm04 bash[23235]: debug 2026-03-09T20:19:57.202+0000 7f48344d7140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T20:19:57.464 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:57 vm04 bash[23235]: debug 2026-03-09T20:19:57.238+0000 7f48344d7140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T20:19:57.464 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:57 vm04 bash[23235]: debug 2026-03-09T20:19:57.274+0000 7f48344d7140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:19:57.464 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:57 vm04 bash[23235]: debug 2026-03-09T20:19:57.414+0000 7f48344d7140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:19:57.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:57 vm08 bash[23232]: cluster 2026-03-09T20:19:57.209477+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:57.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:57 vm08 bash[23232]: cluster 2026-03-09T20:19:57.209477+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:57.821 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:57 vm04 bash[23235]: debug 2026-03-09T20:19:57.458+0000 7f48344d7140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T20:19:57.821 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:57 vm04 bash[23235]: debug 2026-03-09T20:19:57.498+0000 7f48344d7140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:19:57.821 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:57 vm04 bash[23235]: debug 2026-03-09T20:19:57.630+0000 7f48344d7140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:19:57.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:57 vm03 bash[20708]: cluster 2026-03-09T20:19:57.209477+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:57.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:57 vm03 bash[20708]: cluster 2026-03-09T20:19:57.209477+0000 mgr.a (mgr.14150) 73 : cluster [DBG] pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:19:58.087 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:57 vm04 bash[23235]: debug 2026-03-09T20:19:57.814+0000 7f48344d7140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:19:58.087 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[23235]: debug 2026-03-09T20:19:57.998+0000 7f48344d7140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:19:58.087 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[23235]: debug 2026-03-09T20:19:58.038+0000 7f48344d7140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:19:58.087 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[23235]: debug 2026-03-09T20:19:58.078+0000 7f48344d7140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 
2026-03-09T20:19:58.369 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[23235]: debug 2026-03-09T20:19:58.250+0000 7f48344d7140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:19:58.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[22793]: cluster 2026-03-09T20:19:58.582237+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:19:58.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[22793]: cluster 2026-03-09T20:19:58.582237+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:19:58.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[22793]: audit 2026-03-09T20:19:58.583597+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:19:58.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[22793]: audit 2026-03-09T20:19:58.583597+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:19:58.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[22793]: audit 2026-03-09T20:19:58.583966+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:19:58.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[22793]: audit 2026-03-09T20:19:58.583966+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:19:58.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[22793]: audit 2026-03-09T20:19:58.584520+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:19:58.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[22793]: audit 2026-03-09T20:19:58.584520+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:19:58.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[22793]: audit 2026-03-09T20:19:58.584757+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:19:58.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[22793]: audit 2026-03-09T20:19:58.584757+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.? 
192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:19:58.870 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:19:58 vm04 bash[23235]: debug 2026-03-09T20:19:58.570+0000 7f48344d7140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T20:19:58.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:58 vm03 bash[20708]: cluster 2026-03-09T20:19:58.582237+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:19:58.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:58 vm03 bash[20708]: cluster 2026-03-09T20:19:58.582237+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:19:58.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:58 vm03 bash[20708]: audit 2026-03-09T20:19:58.583597+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:19:58.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:58 vm03 bash[20708]: audit 2026-03-09T20:19:58.583597+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:19:58.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:58 vm03 bash[20708]: audit 2026-03-09T20:19:58.583966+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:19:58.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:58 vm03 bash[20708]: audit 2026-03-09T20:19:58.583966+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:19:58.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:58 vm03 bash[20708]: audit 2026-03-09T20:19:58.584520+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:19:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:58 vm03 bash[20708]: audit 2026-03-09T20:19:58.584520+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:19:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:58 vm03 bash[20708]: audit 2026-03-09T20:19:58.584757+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:19:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:58 vm03 bash[20708]: audit 2026-03-09T20:19:58.584757+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.? 
192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:19:59.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:58 vm08 bash[23232]: cluster 2026-03-09T20:19:58.582237+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:19:59.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:58 vm08 bash[23232]: cluster 2026-03-09T20:19:58.582237+0000 mon.a (mon.0) 268 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:19:59.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:58 vm08 bash[23232]: audit 2026-03-09T20:19:58.583597+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:19:59.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:58 vm08 bash[23232]: audit 2026-03-09T20:19:58.583597+0000 mon.c (mon.1) 3 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:19:59.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:58 vm08 bash[23232]: audit 2026-03-09T20:19:58.583966+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:19:59.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:58 vm08 bash[23232]: audit 2026-03-09T20:19:58.583966+0000 mon.c (mon.1) 4 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:19:59.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:58 vm08 bash[23232]: audit 2026-03-09T20:19:58.584520+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:19:59.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:58 vm08 bash[23232]: audit 2026-03-09T20:19:58.584520+0000 mon.c (mon.1) 5 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:19:59.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:58 vm08 bash[23232]: audit 2026-03-09T20:19:58.584757+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.? 192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:19:59.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:58 vm08 bash[23232]: audit 2026-03-09T20:19:58.584757+0000 mon.c (mon.1) 6 : audit [DBG] from='mgr.? 
192.168.123.104:0/4231541143' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:20:00.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:59 vm08 bash[23232]: cluster 2026-03-09T20:19:58.644308+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 67s), standbys: b 2026-03-09T20:20:00.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:59 vm08 bash[23232]: cluster 2026-03-09T20:19:58.644308+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 67s), standbys: b 2026-03-09T20:20:00.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:59 vm08 bash[23232]: audit 2026-03-09T20:19:58.644844+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:20:00.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:59 vm08 bash[23232]: audit 2026-03-09T20:19:58.644844+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:20:00.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:59 vm08 bash[23232]: cluster 2026-03-09T20:19:59.209634+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:00.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:59 vm08 bash[23232]: cluster 2026-03-09T20:19:59.209634+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:00.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:59 vm08 bash[23232]: audit 2026-03-09T20:19:59.368547+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:19:59 vm08 bash[23232]: audit 2026-03-09T20:19:59.368547+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:59 vm04 bash[22793]: cluster 2026-03-09T20:19:58.644308+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 67s), standbys: b 2026-03-09T20:20:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:59 vm04 bash[22793]: cluster 2026-03-09T20:19:58.644308+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 67s), standbys: b 2026-03-09T20:20:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:59 vm04 bash[22793]: audit 2026-03-09T20:19:58.644844+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:20:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:59 vm04 bash[22793]: audit 2026-03-09T20:19:58.644844+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:20:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:59 vm04 bash[22793]: cluster 2026-03-09T20:19:59.209634+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:59 vm04 bash[22793]: cluster 2026-03-09T20:19:59.209634+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 
2026-03-09T20:20:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:59 vm04 bash[22793]: audit 2026-03-09T20:19:59.368547+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:19:59 vm04 bash[22793]: audit 2026-03-09T20:19:59.368547+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:59 vm03 bash[20708]: cluster 2026-03-09T20:19:58.644308+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 67s), standbys: b 2026-03-09T20:20:00.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:59 vm03 bash[20708]: cluster 2026-03-09T20:19:58.644308+0000 mon.a (mon.0) 269 : cluster [DBG] mgrmap e13: a(active, since 67s), standbys: b 2026-03-09T20:20:00.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:59 vm03 bash[20708]: audit 2026-03-09T20:19:58.644844+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:20:00.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:59 vm03 bash[20708]: audit 2026-03-09T20:19:58.644844+0000 mon.a (mon.0) 270 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:20:00.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:59 vm03 bash[20708]: cluster 2026-03-09T20:19:59.209634+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:00.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:59 vm03 bash[20708]: cluster 2026-03-09T20:19:59.209634+0000 mgr.a (mgr.14150) 74 : cluster [DBG] pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:00.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:59 vm03 bash[20708]: audit 2026-03-09T20:19:59.368547+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:19:59 vm03 bash[20708]: audit 2026-03-09T20:19:59.368547+0000 mon.a (mon.0) 271 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.896 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: cluster 2026-03-09T20:20:00.000096+0000 mon.a (mon.0) 272 : cluster [INF] overall HEALTH_OK 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: cluster 2026-03-09T20:20:00.000096+0000 mon.a (mon.0) 272 : cluster [INF] overall HEALTH_OK 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.162123+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.162123+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.185924+0000 mon.a (mon.0) 274 : audit [INF] 
from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.185924+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.186451+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.186451+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.186839+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.186839+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.190501+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.190501+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: cephadm 2026-03-09T20:20:00.205177+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: cephadm 2026-03-09T20:20:00.205177+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 
2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.205328+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.205328+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.205797+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.205797+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.206117+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.206117+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: cephadm 2026-03-09T20:20:00.206471+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Reconfiguring daemon mgr.a on vm03 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: cephadm 2026-03-09T20:20:00.206471+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Reconfiguring daemon mgr.a on vm03 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.627305+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.627305+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.641272+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.641272+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 bash[20708]: audit 2026-03-09T20:20:00.642200+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:00 vm03 
bash[20708]: audit 2026-03-09T20:20:00.642200+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:01.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: cluster 2026-03-09T20:20:00.000096+0000 mon.a (mon.0) 272 : cluster [INF] overall HEALTH_OK 2026-03-09T20:20:01.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: cluster 2026-03-09T20:20:00.000096+0000 mon.a (mon.0) 272 : cluster [INF] overall HEALTH_OK 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.162123+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.162123+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.185924+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.185924+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.186451+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.186451+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.186839+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.186839+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.190501+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.190501+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: cephadm 2026-03-09T20:20:00.205177+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 
2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: cephadm 2026-03-09T20:20:00.205177+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.205328+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.205328+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.205797+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.205797+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.206117+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.206117+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: cephadm 2026-03-09T20:20:00.206471+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Reconfiguring daemon mgr.a on vm03 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: cephadm 2026-03-09T20:20:00.206471+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Reconfiguring daemon mgr.a on vm03 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.627305+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.627305+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.641272+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.641272+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.642200+0000 mon.a (mon.0) 283 : 
audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:01.056 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:00 vm08 bash[23232]: audit 2026-03-09T20:20:00.642200+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: cluster 2026-03-09T20:20:00.000096+0000 mon.a (mon.0) 272 : cluster [INF] overall HEALTH_OK 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: cluster 2026-03-09T20:20:00.000096+0000 mon.a (mon.0) 272 : cluster [INF] overall HEALTH_OK 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.162123+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.162123+0000 mon.a (mon.0) 273 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.185924+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.185924+0000 mon.a (mon.0) 274 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.186451+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.186451+0000 mon.a (mon.0) 275 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.186839+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.186839+0000 mon.a (mon.0) 276 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:01.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.190501+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.190501+0000 mon.a (mon.0) 277 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: cephadm 2026-03-09T20:20:00.205177+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 
2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: cephadm 2026-03-09T20:20:00.205177+0000 mgr.a (mgr.14150) 75 : cephadm [INF] Reconfiguring mgr.a (unknown last config time)... 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.205328+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.205328+0000 mon.a (mon.0) 278 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.a", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.205797+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.205797+0000 mon.a (mon.0) 279 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr services"}]: dispatch 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.206117+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.206117+0000 mon.a (mon.0) 280 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: cephadm 2026-03-09T20:20:00.206471+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Reconfiguring daemon mgr.a on vm03 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: cephadm 2026-03-09T20:20:00.206471+0000 mgr.a (mgr.14150) 76 : cephadm [INF] Reconfiguring daemon mgr.a on vm03 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.627305+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.627305+0000 mon.a (mon.0) 281 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.641272+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.641272+0000 mon.a (mon.0) 282 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.642200+0000 mon.a (mon.0) 283 : 
audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:01.120 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:00 vm04 bash[22793]: audit 2026-03-09T20:20:00.642200+0000 mon.a (mon.0) 283 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:01.740 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:20:01.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:01 vm03 bash[20708]: audit 2026-03-09T20:20:00.967243+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:01.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:01 vm03 bash[20708]: audit 2026-03-09T20:20:00.967243+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:01.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:01 vm03 bash[20708]: audit 2026-03-09T20:20:00.967967+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:01.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:01 vm03 bash[20708]: audit 2026-03-09T20:20:00.967967+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:01.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:01 vm03 bash[20708]: audit 2026-03-09T20:20:00.973732+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:01 vm03 bash[20708]: audit 2026-03-09T20:20:00.973732+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:01.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:01 vm03 bash[20708]: cluster 2026-03-09T20:20:01.209788+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:01.754 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:01 vm03 bash[20708]: cluster 2026-03-09T20:20:01.209788+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:01.755 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph orch daemon add osd vm03:/dev/vde 2026-03-09T20:20:02.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:01 vm08 bash[23232]: audit 2026-03-09T20:20:00.967243+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:02.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:01 vm08 bash[23232]: audit 2026-03-09T20:20:00.967243+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:02.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:01 vm08 
bash[23232]: audit 2026-03-09T20:20:00.967967+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:02.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:01 vm08 bash[23232]: audit 2026-03-09T20:20:00.967967+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:02.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:01 vm08 bash[23232]: audit 2026-03-09T20:20:00.973732+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:02.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:01 vm08 bash[23232]: audit 2026-03-09T20:20:00.973732+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:02.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:01 vm08 bash[23232]: cluster 2026-03-09T20:20:01.209788+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:02.055 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:01 vm08 bash[23232]: cluster 2026-03-09T20:20:01.209788+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:02.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:01 vm04 bash[22793]: audit 2026-03-09T20:20:00.967243+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:02.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:01 vm04 bash[22793]: audit 2026-03-09T20:20:00.967243+0000 mon.a (mon.0) 284 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:02.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:01 vm04 bash[22793]: audit 2026-03-09T20:20:00.967967+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:02.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:01 vm04 bash[22793]: audit 2026-03-09T20:20:00.967967+0000 mon.a (mon.0) 285 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:02.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:01 vm04 bash[22793]: audit 2026-03-09T20:20:00.973732+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:02.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:01 vm04 bash[22793]: audit 2026-03-09T20:20:00.973732+0000 mon.a (mon.0) 286 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:02.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:01 vm04 bash[22793]: cluster 2026-03-09T20:20:01.209788+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:02.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:01 vm04 bash[22793]: cluster 2026-03-09T20:20:01.209788+0000 mgr.a (mgr.14150) 77 : cluster [DBG] pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:04.555 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:04 vm08 bash[23232]: cluster 2026-03-09T20:20:03.210036+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:04.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:04 vm08 bash[23232]: cluster 2026-03-09T20:20:03.210036+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:04.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:04 vm04 bash[22793]: cluster 2026-03-09T20:20:03.210036+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:04.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:04 vm04 bash[22793]: cluster 2026-03-09T20:20:03.210036+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:04.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:04 vm03 bash[20708]: cluster 2026-03-09T20:20:03.210036+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:04.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:04 vm03 bash[20708]: cluster 2026-03-09T20:20:03.210036+0000 mgr.a (mgr.14150) 78 : cluster [DBG] pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:06.364 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:20:06.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:06 vm08 bash[23232]: cluster 2026-03-09T20:20:05.210253+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:06.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:06 vm08 bash[23232]: cluster 2026-03-09T20:20:05.210253+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:06.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:06 vm04 bash[22793]: cluster 2026-03-09T20:20:05.210253+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:06.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:06 vm04 bash[22793]: cluster 2026-03-09T20:20:05.210253+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:06.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:06 vm03 bash[20708]: cluster 2026-03-09T20:20:05.210253+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:06.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:06 vm03 bash[20708]: cluster 2026-03-09T20:20:05.210253+0000 mgr.a (mgr.14150) 79 : cluster [DBG] pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:07.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:07 vm08 bash[23232]: audit 2026-03-09T20:20:06.611448+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:07.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:07 vm08 bash[23232]: audit 2026-03-09T20:20:06.611448+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:07.556 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:07 vm08 bash[23232]: audit 2026-03-09T20:20:06.612601+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:07.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:07 vm08 bash[23232]: audit 2026-03-09T20:20:06.612601+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:07.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:07 vm08 bash[23232]: audit 2026-03-09T20:20:06.613006+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:07.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:07 vm08 bash[23232]: audit 2026-03-09T20:20:06.613006+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:07.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:07 vm04 bash[22793]: audit 2026-03-09T20:20:06.611448+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:07.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:07 vm04 bash[22793]: audit 2026-03-09T20:20:06.611448+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:07.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:07 vm04 bash[22793]: audit 2026-03-09T20:20:06.612601+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:07.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:07 vm04 bash[22793]: audit 2026-03-09T20:20:06.612601+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:07.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:07 vm04 bash[22793]: audit 2026-03-09T20:20:06.613006+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:07.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:07 vm04 bash[22793]: audit 2026-03-09T20:20:06.613006+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:07.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:07 vm03 bash[20708]: audit 2026-03-09T20:20:06.611448+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:07.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:07 vm03 bash[20708]: audit 2026-03-09T20:20:06.611448+0000 mon.a (mon.0) 287 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": 
"json"}]: dispatch 2026-03-09T20:20:07.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:07 vm03 bash[20708]: audit 2026-03-09T20:20:06.612601+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:07.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:07 vm03 bash[20708]: audit 2026-03-09T20:20:06.612601+0000 mon.a (mon.0) 288 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:07.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:07 vm03 bash[20708]: audit 2026-03-09T20:20:06.613006+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:07.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:07 vm03 bash[20708]: audit 2026-03-09T20:20:06.613006+0000 mon.a (mon.0) 289 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:08.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:08 vm08 bash[23232]: audit 2026-03-09T20:20:06.610018+0000 mgr.a (mgr.14150) 80 : audit [DBG] from='client.14211 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:08.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:08 vm08 bash[23232]: audit 2026-03-09T20:20:06.610018+0000 mgr.a (mgr.14150) 80 : audit [DBG] from='client.14211 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:08.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:08 vm08 bash[23232]: cluster 2026-03-09T20:20:07.210515+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:08.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:08 vm08 bash[23232]: cluster 2026-03-09T20:20:07.210515+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:08.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:08 vm04 bash[22793]: audit 2026-03-09T20:20:06.610018+0000 mgr.a (mgr.14150) 80 : audit [DBG] from='client.14211 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:08.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:08 vm04 bash[22793]: audit 2026-03-09T20:20:06.610018+0000 mgr.a (mgr.14150) 80 : audit [DBG] from='client.14211 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:08.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:08 vm04 bash[22793]: cluster 2026-03-09T20:20:07.210515+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:08.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:08 vm04 bash[22793]: cluster 2026-03-09T20:20:07.210515+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:08.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:08 vm03 bash[20708]: audit 2026-03-09T20:20:06.610018+0000 mgr.a (mgr.14150) 80 : audit 
[DBG] from='client.14211 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:08.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:08 vm03 bash[20708]: audit 2026-03-09T20:20:06.610018+0000 mgr.a (mgr.14150) 80 : audit [DBG] from='client.14211 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm03:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:08.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:08 vm03 bash[20708]: cluster 2026-03-09T20:20:07.210515+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:08.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:08 vm03 bash[20708]: cluster 2026-03-09T20:20:07.210515+0000 mgr.a (mgr.14150) 81 : cluster [DBG] pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:10.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:10 vm08 bash[23232]: cluster 2026-03-09T20:20:09.210855+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:10.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:10 vm08 bash[23232]: cluster 2026-03-09T20:20:09.210855+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:10.569 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:10 vm03 bash[20708]: cluster 2026-03-09T20:20:09.210855+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:10.569 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:10 vm03 bash[20708]: cluster 2026-03-09T20:20:09.210855+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:10.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:10 vm04 bash[22793]: cluster 2026-03-09T20:20:09.210855+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:10.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:10 vm04 bash[22793]: cluster 2026-03-09T20:20:09.210855+0000 mgr.a (mgr.14150) 82 : cluster [DBG] pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:11.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:11 vm08 bash[23232]: audit 2026-03-09T20:20:11.091185+0000 mon.a (mon.0) 290 : audit [INF] from='client.? 192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]: dispatch 2026-03-09T20:20:11.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:11 vm08 bash[23232]: audit 2026-03-09T20:20:11.091185+0000 mon.a (mon.0) 290 : audit [INF] from='client.? 192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]: dispatch 2026-03-09T20:20:11.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:11 vm08 bash[23232]: audit 2026-03-09T20:20:11.094123+0000 mon.a (mon.0) 291 : audit [INF] from='client.? 192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]': finished 2026-03-09T20:20:11.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:11 vm08 bash[23232]: audit 2026-03-09T20:20:11.094123+0000 mon.a (mon.0) 291 : audit [INF] from='client.? 
192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]': finished 2026-03-09T20:20:11.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:11 vm08 bash[23232]: cluster 2026-03-09T20:20:11.097037+0000 mon.a (mon.0) 292 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T20:20:11.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:11 vm08 bash[23232]: cluster 2026-03-09T20:20:11.097037+0000 mon.a (mon.0) 292 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T20:20:11.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:11 vm08 bash[23232]: audit 2026-03-09T20:20:11.097224+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:11.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:11 vm08 bash[23232]: audit 2026-03-09T20:20:11.097224+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:11.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:11 vm04 bash[22793]: audit 2026-03-09T20:20:11.091185+0000 mon.a (mon.0) 290 : audit [INF] from='client.? 192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]: dispatch 2026-03-09T20:20:11.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:11 vm04 bash[22793]: audit 2026-03-09T20:20:11.091185+0000 mon.a (mon.0) 290 : audit [INF] from='client.? 192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]: dispatch 2026-03-09T20:20:11.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:11 vm04 bash[22793]: audit 2026-03-09T20:20:11.094123+0000 mon.a (mon.0) 291 : audit [INF] from='client.? 192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]': finished 2026-03-09T20:20:11.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:11 vm04 bash[22793]: audit 2026-03-09T20:20:11.094123+0000 mon.a (mon.0) 291 : audit [INF] from='client.? 
192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]': finished 2026-03-09T20:20:11.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:11 vm04 bash[22793]: cluster 2026-03-09T20:20:11.097037+0000 mon.a (mon.0) 292 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T20:20:11.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:11 vm04 bash[22793]: cluster 2026-03-09T20:20:11.097037+0000 mon.a (mon.0) 292 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T20:20:11.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:11 vm04 bash[22793]: audit 2026-03-09T20:20:11.097224+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:11.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:11 vm04 bash[22793]: audit 2026-03-09T20:20:11.097224+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:11.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:11 vm03 bash[20708]: audit 2026-03-09T20:20:11.091185+0000 mon.a (mon.0) 290 : audit [INF] from='client.? 192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]: dispatch 2026-03-09T20:20:11.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:11 vm03 bash[20708]: audit 2026-03-09T20:20:11.091185+0000 mon.a (mon.0) 290 : audit [INF] from='client.? 192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]: dispatch 2026-03-09T20:20:11.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:11 vm03 bash[20708]: audit 2026-03-09T20:20:11.094123+0000 mon.a (mon.0) 291 : audit [INF] from='client.? 192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]': finished 2026-03-09T20:20:11.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:11 vm03 bash[20708]: audit 2026-03-09T20:20:11.094123+0000 mon.a (mon.0) 291 : audit [INF] from='client.? 
192.168.123.103:0/360978171' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "fe2e9dff-b6c3-47c6-b589-1294f3dee050"}]': finished 2026-03-09T20:20:11.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:11 vm03 bash[20708]: cluster 2026-03-09T20:20:11.097037+0000 mon.a (mon.0) 292 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T20:20:11.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:11 vm03 bash[20708]: cluster 2026-03-09T20:20:11.097037+0000 mon.a (mon.0) 292 : cluster [DBG] osdmap e5: 1 total, 0 up, 1 in 2026-03-09T20:20:11.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:11 vm03 bash[20708]: audit 2026-03-09T20:20:11.097224+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:11.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:11 vm03 bash[20708]: audit 2026-03-09T20:20:11.097224+0000 mon.a (mon.0) 293 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:12.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:12 vm08 bash[23232]: cluster 2026-03-09T20:20:11.211116+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:12.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:12 vm08 bash[23232]: cluster 2026-03-09T20:20:11.211116+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:12.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:12 vm08 bash[23232]: audit 2026-03-09T20:20:11.708156+0000 mon.a (mon.0) 294 : audit [DBG] from='client.? 192.168.123.103:0/3377620028' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:12.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:12 vm08 bash[23232]: audit 2026-03-09T20:20:11.708156+0000 mon.a (mon.0) 294 : audit [DBG] from='client.? 192.168.123.103:0/3377620028' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:12.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:12 vm04 bash[22793]: cluster 2026-03-09T20:20:11.211116+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:12.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:12 vm04 bash[22793]: cluster 2026-03-09T20:20:11.211116+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:12.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:12 vm04 bash[22793]: audit 2026-03-09T20:20:11.708156+0000 mon.a (mon.0) 294 : audit [DBG] from='client.? 192.168.123.103:0/3377620028' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:12.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:12 vm04 bash[22793]: audit 2026-03-09T20:20:11.708156+0000 mon.a (mon.0) 294 : audit [DBG] from='client.? 
192.168.123.103:0/3377620028' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:12.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:12 vm03 bash[20708]: cluster 2026-03-09T20:20:11.211116+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:12.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:12 vm03 bash[20708]: cluster 2026-03-09T20:20:11.211116+0000 mgr.a (mgr.14150) 83 : cluster [DBG] pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:12.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:12 vm03 bash[20708]: audit 2026-03-09T20:20:11.708156+0000 mon.a (mon.0) 294 : audit [DBG] from='client.? 192.168.123.103:0/3377620028' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:12.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:12 vm03 bash[20708]: audit 2026-03-09T20:20:11.708156+0000 mon.a (mon.0) 294 : audit [DBG] from='client.? 192.168.123.103:0/3377620028' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:14.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:14 vm04 bash[22793]: cluster 2026-03-09T20:20:13.211386+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:14.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:14 vm04 bash[22793]: cluster 2026-03-09T20:20:13.211386+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:14.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:14 vm03 bash[20708]: cluster 2026-03-09T20:20:13.211386+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:14.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:14 vm03 bash[20708]: cluster 2026-03-09T20:20:13.211386+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:14.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:14 vm08 bash[23232]: cluster 2026-03-09T20:20:13.211386+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:14.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:14 vm08 bash[23232]: cluster 2026-03-09T20:20:13.211386+0000 mgr.a (mgr.14150) 84 : cluster [DBG] pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:16.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:16 vm04 bash[22793]: cluster 2026-03-09T20:20:15.211636+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:16.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:16 vm04 bash[22793]: cluster 2026-03-09T20:20:15.211636+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:16.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:16 vm03 bash[20708]: cluster 2026-03-09T20:20:15.211636+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:16.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:16 vm03 bash[20708]: cluster 2026-03-09T20:20:15.211636+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:16.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:16 vm08 bash[23232]: cluster 
2026-03-09T20:20:15.211636+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:16.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:16 vm08 bash[23232]: cluster 2026-03-09T20:20:15.211636+0000 mgr.a (mgr.14150) 85 : cluster [DBG] pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:18.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:18 vm04 bash[22793]: cluster 2026-03-09T20:20:17.211807+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:18.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:18 vm04 bash[22793]: cluster 2026-03-09T20:20:17.211807+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:18.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:18 vm03 bash[20708]: cluster 2026-03-09T20:20:17.211807+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:18.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:18 vm03 bash[20708]: cluster 2026-03-09T20:20:17.211807+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:18.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:18 vm08 bash[23232]: cluster 2026-03-09T20:20:17.211807+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:18.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:18 vm08 bash[23232]: cluster 2026-03-09T20:20:17.211807+0000 mgr.a (mgr.14150) 86 : cluster [DBG] pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:20.357 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:20 vm03 bash[20708]: cluster 2026-03-09T20:20:19.212019+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:20.357 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:20 vm03 bash[20708]: cluster 2026-03-09T20:20:19.212019+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:20.357 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:20 vm03 bash[20708]: audit 2026-03-09T20:20:20.116773+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:20:20.357 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:20 vm03 bash[20708]: audit 2026-03-09T20:20:20.116773+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:20:20.357 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:20 vm03 bash[20708]: audit 2026-03-09T20:20:20.117280+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:20.357 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:20 vm03 bash[20708]: audit 2026-03-09T20:20:20.117280+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:20.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:20 vm04 bash[22793]: cluster 2026-03-09T20:20:19.212019+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 
0 B / 0 B avail 2026-03-09T20:20:20.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:20 vm04 bash[22793]: cluster 2026-03-09T20:20:19.212019+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:20.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:20 vm04 bash[22793]: audit 2026-03-09T20:20:20.116773+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:20:20.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:20 vm04 bash[22793]: audit 2026-03-09T20:20:20.116773+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:20:20.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:20 vm04 bash[22793]: audit 2026-03-09T20:20:20.117280+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:20.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:20 vm04 bash[22793]: audit 2026-03-09T20:20:20.117280+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:20.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:20 vm08 bash[23232]: cluster 2026-03-09T20:20:19.212019+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:20.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:20 vm08 bash[23232]: cluster 2026-03-09T20:20:19.212019+0000 mgr.a (mgr.14150) 87 : cluster [DBG] pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:20.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:20 vm08 bash[23232]: audit 2026-03-09T20:20:20.116773+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:20:20.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:20 vm08 bash[23232]: audit 2026-03-09T20:20:20.116773+0000 mon.a (mon.0) 295 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch 2026-03-09T20:20:20.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:20 vm08 bash[23232]: audit 2026-03-09T20:20:20.117280+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:20.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:20 vm08 bash[23232]: audit 2026-03-09T20:20:20.117280+0000 mon.a (mon.0) 296 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:20 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T20:20:20.907 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:20:20 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:20:21.208 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:21 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:20:21.208 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:20:21 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:20:21.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:21 vm04 bash[22793]: cephadm 2026-03-09T20:20:20.117721+0000 mgr.a (mgr.14150) 88 : cephadm [INF] Deploying daemon osd.0 on vm03 2026-03-09T20:20:21.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:21 vm04 bash[22793]: cephadm 2026-03-09T20:20:20.117721+0000 mgr.a (mgr.14150) 88 : cephadm [INF] Deploying daemon osd.0 on vm03 2026-03-09T20:20:21.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:21 vm04 bash[22793]: audit 2026-03-09T20:20:21.115634+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:21.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:21 vm04 bash[22793]: audit 2026-03-09T20:20:21.115634+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:21.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:21 vm04 bash[22793]: audit 2026-03-09T20:20:21.120788+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:21 vm04 bash[22793]: audit 2026-03-09T20:20:21.120788+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:21 vm04 bash[22793]: audit 2026-03-09T20:20:21.124904+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:21 vm04 bash[22793]: audit 2026-03-09T20:20:21.124904+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:21 vm03 bash[20708]: cephadm 2026-03-09T20:20:20.117721+0000 mgr.a (mgr.14150) 88 : cephadm [INF] Deploying daemon osd.0 on vm03 2026-03-09T20:20:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:21 
vm03 bash[20708]: cephadm 2026-03-09T20:20:20.117721+0000 mgr.a (mgr.14150) 88 : cephadm [INF] Deploying daemon osd.0 on vm03 2026-03-09T20:20:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:21 vm03 bash[20708]: audit 2026-03-09T20:20:21.115634+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:21 vm03 bash[20708]: audit 2026-03-09T20:20:21.115634+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:21 vm03 bash[20708]: audit 2026-03-09T20:20:21.120788+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:21 vm03 bash[20708]: audit 2026-03-09T20:20:21.120788+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:21 vm03 bash[20708]: audit 2026-03-09T20:20:21.124904+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:21 vm03 bash[20708]: audit 2026-03-09T20:20:21.124904+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:21 vm08 bash[23232]: cephadm 2026-03-09T20:20:20.117721+0000 mgr.a (mgr.14150) 88 : cephadm [INF] Deploying daemon osd.0 on vm03 2026-03-09T20:20:21.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:21 vm08 bash[23232]: cephadm 2026-03-09T20:20:20.117721+0000 mgr.a (mgr.14150) 88 : cephadm [INF] Deploying daemon osd.0 on vm03 2026-03-09T20:20:21.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:21 vm08 bash[23232]: audit 2026-03-09T20:20:21.115634+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:21.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:21 vm08 bash[23232]: audit 2026-03-09T20:20:21.115634+0000 mon.a (mon.0) 297 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:21.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:21 vm08 bash[23232]: audit 2026-03-09T20:20:21.120788+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:21 vm08 bash[23232]: audit 2026-03-09T20:20:21.120788+0000 mon.a (mon.0) 298 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:21 vm08 bash[23232]: audit 2026-03-09T20:20:21.124904+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:21.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:21 vm08 bash[23232]: audit 2026-03-09T20:20:21.124904+0000 mon.a (mon.0) 299 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:22.603 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:22 vm03 bash[20708]: cluster 2026-03-09T20:20:21.212224+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:22.603 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:22 vm03 bash[20708]: cluster 2026-03-09T20:20:21.212224+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:22 vm04 bash[22793]: cluster 2026-03-09T20:20:21.212224+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:22 vm04 bash[22793]: cluster 2026-03-09T20:20:21.212224+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:22.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:22 vm08 bash[23232]: cluster 2026-03-09T20:20:21.212224+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:22 vm08 bash[23232]: cluster 2026-03-09T20:20:21.212224+0000 mgr.a (mgr.14150) 89 : cluster [DBG] pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:24 vm04 bash[22793]: cluster 2026-03-09T20:20:23.212522+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:24 vm04 bash[22793]: cluster 2026-03-09T20:20:23.212522+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:24.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:24 vm03 bash[20708]: cluster 2026-03-09T20:20:23.212522+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:24.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:24 vm03 bash[20708]: cluster 2026-03-09T20:20:23.212522+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:24.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:24 vm08 bash[23232]: cluster 2026-03-09T20:20:23.212522+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:24.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:24 vm08 bash[23232]: cluster 2026-03-09T20:20:23.212522+0000 mgr.a (mgr.14150) 90 : cluster [DBG] pgmap v41: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:25.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:25 vm04 bash[22793]: audit 2026-03-09T20:20:24.660356+0000 mon.a (mon.0) 300 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:20:25.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:25 vm04 bash[22793]: audit 2026-03-09T20:20:24.660356+0000 mon.a (mon.0) 300 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:20:25.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:25 vm03 bash[20708]: audit 
2026-03-09T20:20:24.660356+0000 mon.a (mon.0) 300 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:20:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:25 vm03 bash[20708]: audit 2026-03-09T20:20:24.660356+0000 mon.a (mon.0) 300 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:20:25.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:25 vm08 bash[23232]: audit 2026-03-09T20:20:24.660356+0000 mon.a (mon.0) 300 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:20:25.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:25 vm08 bash[23232]: audit 2026-03-09T20:20:24.660356+0000 mon.a (mon.0) 300 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch 2026-03-09T20:20:26.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:26 vm04 bash[22793]: cluster 2026-03-09T20:20:25.212756+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:26.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:26 vm04 bash[22793]: cluster 2026-03-09T20:20:25.212756+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:26.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:26 vm04 bash[22793]: audit 2026-03-09T20:20:25.348072+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:20:26.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:26 vm04 bash[22793]: audit 2026-03-09T20:20:25.348072+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:20:26.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:26 vm04 bash[22793]: cluster 2026-03-09T20:20:25.349461+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:20:26.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:26 vm04 bash[22793]: cluster 2026-03-09T20:20:25.349461+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:20:26.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:26 vm04 bash[22793]: audit 2026-03-09T20:20:25.349752+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T20:20:26.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:26 vm04 bash[22793]: audit 2026-03-09T20:20:25.349752+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' 
cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T20:20:26.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:26 vm04 bash[22793]: audit 2026-03-09T20:20:25.349830+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:26.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:26 vm04 bash[22793]: audit 2026-03-09T20:20:25.349830+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:26.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:26 vm03 bash[20708]: cluster 2026-03-09T20:20:25.212756+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:26 vm03 bash[20708]: cluster 2026-03-09T20:20:25.212756+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:26 vm03 bash[20708]: audit 2026-03-09T20:20:25.348072+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:20:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:26 vm03 bash[20708]: audit 2026-03-09T20:20:25.348072+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:20:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:26 vm03 bash[20708]: cluster 2026-03-09T20:20:25.349461+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:20:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:26 vm03 bash[20708]: cluster 2026-03-09T20:20:25.349461+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:20:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:26 vm03 bash[20708]: audit 2026-03-09T20:20:25.349752+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T20:20:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:26 vm03 bash[20708]: audit 2026-03-09T20:20:25.349752+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T20:20:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:26 vm03 bash[20708]: audit 2026-03-09T20:20:25.349830+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:26 vm03 bash[20708]: audit 2026-03-09T20:20:25.349830+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": 
"osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:26.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:26 vm08 bash[23232]: cluster 2026-03-09T20:20:25.212756+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:26 vm08 bash[23232]: cluster 2026-03-09T20:20:25.212756+0000 mgr.a (mgr.14150) 91 : cluster [DBG] pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:26 vm08 bash[23232]: audit 2026-03-09T20:20:25.348072+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:20:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:26 vm08 bash[23232]: audit 2026-03-09T20:20:25.348072+0000 mon.a (mon.0) 301 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished 2026-03-09T20:20:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:26 vm08 bash[23232]: cluster 2026-03-09T20:20:25.349461+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:20:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:26 vm08 bash[23232]: cluster 2026-03-09T20:20:25.349461+0000 mon.a (mon.0) 302 : cluster [DBG] osdmap e6: 1 total, 0 up, 1 in 2026-03-09T20:20:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:26 vm08 bash[23232]: audit 2026-03-09T20:20:25.349752+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T20:20:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:26 vm08 bash[23232]: audit 2026-03-09T20:20:25.349752+0000 mon.a (mon.0) 303 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]: dispatch 2026-03-09T20:20:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:26 vm08 bash[23232]: audit 2026-03-09T20:20:25.349830+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:26 vm08 bash[23232]: audit 2026-03-09T20:20:25.349830+0000 mon.a (mon.0) 304 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:26.354172+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:26.354172+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 
[v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: cluster 2026-03-09T20:20:26.356299+0000 mon.a (mon.0) 306 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: cluster 2026-03-09T20:20:26.356299+0000 mon.a (mon.0) 306 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:26.357268+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:26.357268+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:26.359921+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:26.359921+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.251992+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.251992+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.264390+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.264390+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.265359+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.265359+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.266259+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:27.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.266259+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.272263+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.272263+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.309127+0000 mon.a (mon.0) 314 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.309127+0000 mon.a (mon.0) 314 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.360089+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:27 vm03 bash[20708]: audit 2026-03-09T20:20:27.360089+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:26.354172+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:26.354172+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: cluster 2026-03-09T20:20:26.356299+0000 mon.a (mon.0) 306 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: cluster 2026-03-09T20:20:26.356299+0000 mon.a (mon.0) 306 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:26.357268+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:26.357268+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: 
dispatch 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:26.359921+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:26.359921+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.251992+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.251992+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.264390+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.264390+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.265359+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.265359+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.266259+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.266259+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.272263+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.272263+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.309127+0000 mon.a (mon.0) 314 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.309127+0000 mon.a (mon.0) 314 : audit [INF] from='osd.0 
[v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.360089+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:27 vm08 bash[23232]: audit 2026-03-09T20:20:27.360089+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:26.354172+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T20:20:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:26.354172+0000 mon.a (mon.0) 305 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=vm03", "root=default"]}]': finished 2026-03-09T20:20:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: cluster 2026-03-09T20:20:26.356299+0000 mon.a (mon.0) 306 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:20:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: cluster 2026-03-09T20:20:26.356299+0000 mon.a (mon.0) 306 : cluster [DBG] osdmap e7: 1 total, 0 up, 1 in 2026-03-09T20:20:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:26.357268+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:26.357268+0000 mon.a (mon.0) 307 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:26.359921+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:26.359921+0000 mon.a (mon.0) 308 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.251992+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.251992+0000 mon.a (mon.0) 309 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.264390+0000 mon.a (mon.0) 
310 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.264390+0000 mon.a (mon.0) 310 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.265359+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.265359+0000 mon.a (mon.0) 311 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.266259+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.266259+0000 mon.a (mon.0) 312 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.272263+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.272263+0000 mon.a (mon.0) 313 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.309127+0000 mon.a (mon.0) 314 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.309127+0000 mon.a (mon.0) 314 : audit [INF] from='osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613]' entity='osd.0' 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.360089+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:27.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:27 vm04 bash[22793]: audit 2026-03-09T20:20:27.360089+0000 mon.a (mon.0) 315 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:28.247 INFO:teuthology.orchestra.run.vm03.stdout:Created osd(s) 0 on host 'vm03' 2026-03-09T20:20:28.365 DEBUG:teuthology.orchestra.run.vm03:osd.0> sudo journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@osd.0.service 2026-03-09T20:20:28.366 INFO:tasks.cephadm:Deploying osd.1 on vm04 with /dev/vde... 
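The task driver is about to repeat for osd.1 the same two-step flow it used for osd.0: wipe the target device, then ask the orchestrator to create an OSD on it (the exact invocations appear in the entries that follow). A minimal sketch of that flow, assuming an admin shell on a cephadm-managed host and using vm04:/dev/vde from this run as the example device:

    # wipe any previous LVM/partition state on the device (destructive)
    cephadm ceph-volume -- lvm zap /dev/vde
    # ask the orchestrator to deploy a new OSD daemon on that host/device
    ceph orch daemon add osd vm04:/dev/vde
    # confirm the daemon is running and the OSD joined the map
    ceph orch ps --daemon-type osd
    ceph osd tree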
2026-03-09T20:20:28.366 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- lvm zap /dev/vde 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: cluster 2026-03-09T20:20:25.664244+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: cluster 2026-03-09T20:20:25.664244+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: cluster 2026-03-09T20:20:25.664288+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: cluster 2026-03-09T20:20:25.664288+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: cluster 2026-03-09T20:20:27.212943+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: cluster 2026-03-09T20:20:27.212943+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: audit 2026-03-09T20:20:28.234901+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: audit 2026-03-09T20:20:28.234901+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: audit 2026-03-09T20:20:28.240129+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: audit 2026-03-09T20:20:28.240129+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: audit 2026-03-09T20:20:28.244082+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: audit 2026-03-09T20:20:28.244082+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: cluster 2026-03-09T20:20:28.313646+0000 mon.a (mon.0) 319 : cluster [INF] osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613] boot 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: cluster 2026-03-09T20:20:28.313646+0000 mon.a (mon.0) 319 : cluster [INF] osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613] boot 
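A few entries below, the cephadm mgr module issues "config rm ... osd_memory_target" for osd.0, logs "Adjusting osd_memory_target on vm03 to 455.7M", and then warns that the newly autotuned value cannot be applied because 477921689 bytes is below the 939524096-byte minimum; on a VM this small the autotuner's share of host memory simply falls under the osd_memory_target floor, so the warning is expected rather than a failure of the deployment. A quick sanity check of the two values from that warning, converted to MiB:

    echo $(( 477921689 / 1024 / 1024 ))   # 455  -> the autotuned value (~455.7 MiB)
    echo $(( 939524096 / 1024 / 1024 ))   # 896  -> the osd_memory_target minimum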
2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: cluster 2026-03-09T20:20:28.313677+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: cluster 2026-03-09T20:20:28.313677+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: audit 2026-03-09T20:20:28.313789+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:28 vm03 bash[20708]: audit 2026-03-09T20:20:28.313789+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: cluster 2026-03-09T20:20:25.664244+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: cluster 2026-03-09T20:20:25.664244+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: cluster 2026-03-09T20:20:25.664288+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: cluster 2026-03-09T20:20:25.664288+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: cluster 2026-03-09T20:20:27.212943+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: cluster 2026-03-09T20:20:27.212943+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: audit 2026-03-09T20:20:28.234901+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: audit 2026-03-09T20:20:28.234901+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: audit 2026-03-09T20:20:28.240129+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: audit 2026-03-09T20:20:28.240129+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: audit 2026-03-09T20:20:28.244082+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: 
audit 2026-03-09T20:20:28.244082+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: cluster 2026-03-09T20:20:28.313646+0000 mon.a (mon.0) 319 : cluster [INF] osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613] boot 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: cluster 2026-03-09T20:20:28.313646+0000 mon.a (mon.0) 319 : cluster [INF] osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613] boot 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: cluster 2026-03-09T20:20:28.313677+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: cluster 2026-03-09T20:20:28.313677+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: audit 2026-03-09T20:20:28.313789+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:28.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:28 vm08 bash[23232]: audit 2026-03-09T20:20:28.313789+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: cluster 2026-03-09T20:20:25.664244+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: cluster 2026-03-09T20:20:25.664244+0000 osd.0 (osd.0) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: cluster 2026-03-09T20:20:25.664288+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: cluster 2026-03-09T20:20:25.664288+0000 osd.0 (osd.0) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: cluster 2026-03-09T20:20:27.212943+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: cluster 2026-03-09T20:20:27.212943+0000 mgr.a (mgr.14150) 92 : cluster [DBG] pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: audit 2026-03-09T20:20:28.234901+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: audit 2026-03-09T20:20:28.234901+0000 mon.a (mon.0) 316 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: audit 2026-03-09T20:20:28.240129+0000 mon.a 
(mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: audit 2026-03-09T20:20:28.240129+0000 mon.a (mon.0) 317 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: audit 2026-03-09T20:20:28.244082+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: audit 2026-03-09T20:20:28.244082+0000 mon.a (mon.0) 318 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:28.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: cluster 2026-03-09T20:20:28.313646+0000 mon.a (mon.0) 319 : cluster [INF] osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613] boot 2026-03-09T20:20:28.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: cluster 2026-03-09T20:20:28.313646+0000 mon.a (mon.0) 319 : cluster [INF] osd.0 [v2:192.168.123.103:6802/1560508613,v1:192.168.123.103:6803/1560508613] boot 2026-03-09T20:20:28.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: cluster 2026-03-09T20:20:28.313677+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T20:20:28.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: cluster 2026-03-09T20:20:28.313677+0000 mon.a (mon.0) 320 : cluster [DBG] osdmap e8: 1 total, 1 up, 1 in 2026-03-09T20:20:28.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: audit 2026-03-09T20:20:28.313789+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:28.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:28 vm04 bash[22793]: audit 2026-03-09T20:20:28.313789+0000 mon.a (mon.0) 321 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:20:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:29 vm03 bash[20708]: cluster 2026-03-09T20:20:29.213263+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:29 vm03 bash[20708]: cluster 2026-03-09T20:20:29.213263+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:29.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:29 vm08 bash[23232]: cluster 2026-03-09T20:20:29.213263+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:29.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:29 vm08 bash[23232]: cluster 2026-03-09T20:20:29.213263+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:29.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:29 vm04 bash[22793]: cluster 2026-03-09T20:20:29.213263+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:29.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:29 vm04 bash[22793]: cluster 
2026-03-09T20:20:29.213263+0000 mgr.a (mgr.14150) 93 : cluster [DBG] pgmap v47: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:30 vm03 bash[20708]: cluster 2026-03-09T20:20:29.383827+0000 mon.a (mon.0) 322 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T20:20:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:30 vm03 bash[20708]: cluster 2026-03-09T20:20:29.383827+0000 mon.a (mon.0) 322 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T20:20:30.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:30 vm08 bash[23232]: cluster 2026-03-09T20:20:29.383827+0000 mon.a (mon.0) 322 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T20:20:30.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:30 vm08 bash[23232]: cluster 2026-03-09T20:20:29.383827+0000 mon.a (mon.0) 322 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T20:20:30.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:30 vm04 bash[22793]: cluster 2026-03-09T20:20:29.383827+0000 mon.a (mon.0) 322 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T20:20:30.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:30 vm04 bash[22793]: cluster 2026-03-09T20:20:29.383827+0000 mon.a (mon.0) 322 : cluster [DBG] osdmap e9: 1 total, 1 up, 1 in 2026-03-09T20:20:31.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:31 vm03 bash[20708]: cluster 2026-03-09T20:20:31.213506+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:31 vm03 bash[20708]: cluster 2026-03-09T20:20:31.213506+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:31.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:31 vm08 bash[23232]: cluster 2026-03-09T20:20:31.213506+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:31.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:31 vm08 bash[23232]: cluster 2026-03-09T20:20:31.213506+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:31.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:31 vm04 bash[22793]: cluster 2026-03-09T20:20:31.213506+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:31.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:31 vm04 bash[22793]: cluster 2026-03-09T20:20:31.213506+0000 mgr.a (mgr.14150) 94 : cluster [DBG] pgmap v49: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:32.976 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.b/config 2026-03-09T20:20:33.887 INFO:teuthology.orchestra.run.vm04.stdout: 2026-03-09T20:20:33.900 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph orch daemon add osd vm04:/dev/vde 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: cluster 2026-03-09T20:20:33.213733+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 
20 GiB avail 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: cluster 2026-03-09T20:20:33.213733+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.858363+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.858363+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.863059+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.863059+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.864021+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.864021+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.865079+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.865079+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.865475+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.865475+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.869984+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:34 vm03 bash[20708]: audit 2026-03-09T20:20:33.869984+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: cluster 2026-03-09T20:20:33.213733+0000 mgr.a 
(mgr.14150) 95 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:34.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: cluster 2026-03-09T20:20:33.213733+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:34.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.858363+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.858363+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.863059+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.863059+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.864021+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.864021+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.865079+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.865079+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.865475+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.865475+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.869984+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:34 vm08 bash[23232]: audit 2026-03-09T20:20:33.869984+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.619 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: cluster 2026-03-09T20:20:33.213733+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:34.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: cluster 2026-03-09T20:20:33.213733+0000 mgr.a (mgr.14150) 95 : cluster [DBG] pgmap v50: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:34.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.858363+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.858363+0000 mon.a (mon.0) 323 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.863059+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.863059+0000 mon.a (mon.0) 324 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.864021+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.864021+0000 mon.a (mon.0) 325 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:20:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.865079+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.865079+0000 mon.a (mon.0) 326 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.865475+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.865475+0000 mon.a (mon.0) 327 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:20:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.869984+0000 mon.a (mon.0) 328 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:34 vm04 bash[22793]: audit 2026-03-09T20:20:33.869984+0000 mon.a (mon.0) 328 : audit [INF] 
from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:35.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:35 vm08 bash[23232]: cephadm 2026-03-09T20:20:33.851946+0000 mgr.a (mgr.14150) 96 : cephadm [INF] Detected new or changed devices on vm03 2026-03-09T20:20:35.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:35 vm08 bash[23232]: cephadm 2026-03-09T20:20:33.851946+0000 mgr.a (mgr.14150) 96 : cephadm [INF] Detected new or changed devices on vm03 2026-03-09T20:20:35.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:35 vm08 bash[23232]: cephadm 2026-03-09T20:20:33.864356+0000 mgr.a (mgr.14150) 97 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M 2026-03-09T20:20:35.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:35 vm08 bash[23232]: cephadm 2026-03-09T20:20:33.864356+0000 mgr.a (mgr.14150) 97 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M 2026-03-09T20:20:35.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:35 vm08 bash[23232]: cephadm 2026-03-09T20:20:33.864758+0000 mgr.a (mgr.14150) 98 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T20:20:35.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:35 vm08 bash[23232]: cephadm 2026-03-09T20:20:33.864758+0000 mgr.a (mgr.14150) 98 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T20:20:35.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:35 vm04 bash[22793]: cephadm 2026-03-09T20:20:33.851946+0000 mgr.a (mgr.14150) 96 : cephadm [INF] Detected new or changed devices on vm03 2026-03-09T20:20:35.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:35 vm04 bash[22793]: cephadm 2026-03-09T20:20:33.851946+0000 mgr.a (mgr.14150) 96 : cephadm [INF] Detected new or changed devices on vm03 2026-03-09T20:20:35.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:35 vm04 bash[22793]: cephadm 2026-03-09T20:20:33.864356+0000 mgr.a (mgr.14150) 97 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M 2026-03-09T20:20:35.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:35 vm04 bash[22793]: cephadm 2026-03-09T20:20:33.864356+0000 mgr.a (mgr.14150) 97 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M 2026-03-09T20:20:35.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:35 vm04 bash[22793]: cephadm 2026-03-09T20:20:33.864758+0000 mgr.a (mgr.14150) 98 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T20:20:35.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:35 vm04 bash[22793]: cephadm 2026-03-09T20:20:33.864758+0000 mgr.a (mgr.14150) 98 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T20:20:35.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:35 vm03 bash[20708]: cephadm 2026-03-09T20:20:33.851946+0000 mgr.a (mgr.14150) 96 : cephadm [INF] Detected new or changed devices on vm03 2026-03-09T20:20:35.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:35 vm03 bash[20708]: cephadm 2026-03-09T20:20:33.851946+0000 mgr.a (mgr.14150) 96 : cephadm [INF] Detected new or changed devices on vm03 2026-03-09T20:20:35.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:35 vm03 bash[20708]: cephadm 2026-03-09T20:20:33.864356+0000 mgr.a 
(mgr.14150) 97 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M 2026-03-09T20:20:35.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:35 vm03 bash[20708]: cephadm 2026-03-09T20:20:33.864356+0000 mgr.a (mgr.14150) 97 : cephadm [INF] Adjusting osd_memory_target on vm03 to 455.7M 2026-03-09T20:20:35.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:35 vm03 bash[20708]: cephadm 2026-03-09T20:20:33.864758+0000 mgr.a (mgr.14150) 98 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T20:20:35.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:35 vm03 bash[20708]: cephadm 2026-03-09T20:20:33.864758+0000 mgr.a (mgr.14150) 98 : cephadm [WRN] Unable to set osd_memory_target on vm03 to 477921689: error parsing value: Value '477921689' is below minimum 939524096 2026-03-09T20:20:36.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:36 vm08 bash[23232]: cluster 2026-03-09T20:20:35.213934+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:36.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:36 vm08 bash[23232]: cluster 2026-03-09T20:20:35.213934+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:36.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:36 vm04 bash[22793]: cluster 2026-03-09T20:20:35.213934+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:36.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:36 vm04 bash[22793]: cluster 2026-03-09T20:20:35.213934+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:36.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:36 vm03 bash[20708]: cluster 2026-03-09T20:20:35.213934+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:36.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:36 vm03 bash[20708]: cluster 2026-03-09T20:20:35.213934+0000 mgr.a (mgr.14150) 99 : cluster [DBG] pgmap v51: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:38.511 INFO:teuthology.orchestra.run.vm04.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.b/config 2026-03-09T20:20:38.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:38 vm08 bash[23232]: cluster 2026-03-09T20:20:37.214138+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:38.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:38 vm08 bash[23232]: cluster 2026-03-09T20:20:37.214138+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:38.559 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:38 vm04 bash[22793]: cluster 2026-03-09T20:20:37.214138+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:38.559 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:38 vm04 bash[22793]: cluster 2026-03-09T20:20:37.214138+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:38.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:38 vm03 bash[20708]: cluster 
2026-03-09T20:20:37.214138+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:38.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:38 vm03 bash[20708]: cluster 2026-03-09T20:20:37.214138+0000 mgr.a (mgr.14150) 100 : cluster [DBG] pgmap v52: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:39.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:39 vm08 bash[23232]: audit 2026-03-09T20:20:38.768053+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:39.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:39 vm08 bash[23232]: audit 2026-03-09T20:20:38.768053+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:39.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:39 vm08 bash[23232]: audit 2026-03-09T20:20:38.769212+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:39.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:39 vm08 bash[23232]: audit 2026-03-09T20:20:38.769212+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:39.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:39 vm08 bash[23232]: audit 2026-03-09T20:20:38.769628+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:39.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:39 vm08 bash[23232]: audit 2026-03-09T20:20:38.769628+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:39.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:39 vm04 bash[22793]: audit 2026-03-09T20:20:38.768053+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:39.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:39 vm04 bash[22793]: audit 2026-03-09T20:20:38.768053+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:39.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:39 vm04 bash[22793]: audit 2026-03-09T20:20:38.769212+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:39.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:39 vm04 bash[22793]: audit 2026-03-09T20:20:38.769212+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:39.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:39 vm04 bash[22793]: audit 2026-03-09T20:20:38.769628+0000 mon.a (mon.0) 331 : audit 
[DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:39.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:39 vm04 bash[22793]: audit 2026-03-09T20:20:38.769628+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:39.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:39 vm03 bash[20708]: audit 2026-03-09T20:20:38.768053+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:39.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:39 vm03 bash[20708]: audit 2026-03-09T20:20:38.768053+0000 mon.a (mon.0) 329 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:20:39.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:39 vm03 bash[20708]: audit 2026-03-09T20:20:38.769212+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:39.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:39 vm03 bash[20708]: audit 2026-03-09T20:20:38.769212+0000 mon.a (mon.0) 330 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:20:39.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:39 vm03 bash[20708]: audit 2026-03-09T20:20:38.769628+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:39.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:39 vm03 bash[20708]: audit 2026-03-09T20:20:38.769628+0000 mon.a (mon.0) 331 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:40.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:40 vm08 bash[23232]: audit 2026-03-09T20:20:38.766544+0000 mgr.a (mgr.14150) 101 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:40.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:40 vm08 bash[23232]: audit 2026-03-09T20:20:38.766544+0000 mgr.a (mgr.14150) 101 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:40.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:40 vm08 bash[23232]: cluster 2026-03-09T20:20:39.214408+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:40.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:40 vm08 bash[23232]: cluster 2026-03-09T20:20:39.214408+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:40.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:40 vm04 bash[22793]: audit 2026-03-09T20:20:38.766544+0000 mgr.a (mgr.14150) 101 : audit [DBG] from='client.24131 -' entity='client.admin' 
cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:40.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:40 vm04 bash[22793]: audit 2026-03-09T20:20:38.766544+0000 mgr.a (mgr.14150) 101 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:40.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:40 vm04 bash[22793]: cluster 2026-03-09T20:20:39.214408+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:40.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:40 vm04 bash[22793]: cluster 2026-03-09T20:20:39.214408+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:40.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:40 vm03 bash[20708]: audit 2026-03-09T20:20:38.766544+0000 mgr.a (mgr.14150) 101 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:40.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:40 vm03 bash[20708]: audit 2026-03-09T20:20:38.766544+0000 mgr.a (mgr.14150) 101 : audit [DBG] from='client.24131 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm04:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:20:40.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:40 vm03 bash[20708]: cluster 2026-03-09T20:20:39.214408+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:40.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:40 vm03 bash[20708]: cluster 2026-03-09T20:20:39.214408+0000 mgr.a (mgr.14150) 102 : cluster [DBG] pgmap v53: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:42.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:42 vm08 bash[23232]: cluster 2026-03-09T20:20:41.214627+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:42.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:42 vm08 bash[23232]: cluster 2026-03-09T20:20:41.214627+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:42.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:42 vm04 bash[22793]: cluster 2026-03-09T20:20:41.214627+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:42.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:42 vm04 bash[22793]: cluster 2026-03-09T20:20:41.214627+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:42.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:42 vm03 bash[20708]: cluster 2026-03-09T20:20:41.214627+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:42.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:42 vm03 bash[20708]: cluster 2026-03-09T20:20:41.214627+0000 mgr.a (mgr.14150) 103 : cluster [DBG] pgmap v54: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 
bash[22793]: cluster 2026-03-09T20:20:43.214870+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: cluster 2026-03-09T20:20:43.214870+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: audit 2026-03-09T20:20:44.233336+0000 mon.a (mon.0) 332 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: audit 2026-03-09T20:20:44.233336+0000 mon.a (mon.0) 332 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: audit 2026-03-09T20:20:44.234695+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.104:0/815863044' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: audit 2026-03-09T20:20:44.234695+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.104:0/815863044' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: audit 2026-03-09T20:20:44.237404+0000 mon.a (mon.0) 333 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]': finished 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: audit 2026-03-09T20:20:44.237404+0000 mon.a (mon.0) 333 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]': finished 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: cluster 2026-03-09T20:20:44.240576+0000 mon.a (mon.0) 334 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: cluster 2026-03-09T20:20:44.240576+0000 mon.a (mon.0) 334 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: audit 2026-03-09T20:20:44.240705+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:44.588 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:44 vm04 bash[22793]: audit 2026-03-09T20:20:44.240705+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:44.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: cluster 2026-03-09T20:20:43.214870+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: cluster 2026-03-09T20:20:43.214870+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: audit 2026-03-09T20:20:44.233336+0000 mon.a (mon.0) 332 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: audit 2026-03-09T20:20:44.233336+0000 mon.a (mon.0) 332 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: audit 2026-03-09T20:20:44.234695+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.104:0/815863044' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: audit 2026-03-09T20:20:44.234695+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.104:0/815863044' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: audit 2026-03-09T20:20:44.237404+0000 mon.a (mon.0) 333 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]': finished 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: audit 2026-03-09T20:20:44.237404+0000 mon.a (mon.0) 333 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]': finished 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: cluster 2026-03-09T20:20:44.240576+0000 mon.a (mon.0) 334 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: cluster 2026-03-09T20:20:44.240576+0000 mon.a (mon.0) 334 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: audit 2026-03-09T20:20:44.240705+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:44 vm03 bash[20708]: audit 2026-03-09T20:20:44.240705+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: cluster 2026-03-09T20:20:43.214870+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: cluster 2026-03-09T20:20:43.214870+0000 mgr.a (mgr.14150) 104 : cluster [DBG] pgmap v55: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: audit 2026-03-09T20:20:44.233336+0000 mon.a (mon.0) 332 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: audit 2026-03-09T20:20:44.233336+0000 mon.a (mon.0) 332 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: audit 2026-03-09T20:20:44.234695+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.104:0/815863044' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: audit 2026-03-09T20:20:44.234695+0000 mon.c (mon.1) 7 : audit [INF] from='client.? 192.168.123.104:0/815863044' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]: dispatch 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: audit 2026-03-09T20:20:44.237404+0000 mon.a (mon.0) 333 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]': finished 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: audit 2026-03-09T20:20:44.237404+0000 mon.a (mon.0) 333 : audit [INF] from='client.? 
' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3eb69c4e-b9de-4a57-b23c-633c67090f8d"}]': finished 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: cluster 2026-03-09T20:20:44.240576+0000 mon.a (mon.0) 334 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: cluster 2026-03-09T20:20:44.240576+0000 mon.a (mon.0) 334 : cluster [DBG] osdmap e10: 2 total, 1 up, 2 in 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: audit 2026-03-09T20:20:44.240705+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:44.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:44 vm08 bash[23232]: audit 2026-03-09T20:20:44.240705+0000 mon.a (mon.0) 335 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:45.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:45 vm04 bash[22793]: audit 2026-03-09T20:20:44.824744+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.104:0/1694205835' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:45.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:45 vm04 bash[22793]: audit 2026-03-09T20:20:44.824744+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.104:0/1694205835' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:45.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:45 vm03 bash[20708]: audit 2026-03-09T20:20:44.824744+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.104:0/1694205835' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:45.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:45 vm03 bash[20708]: audit 2026-03-09T20:20:44.824744+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.104:0/1694205835' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:45.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:45 vm08 bash[23232]: audit 2026-03-09T20:20:44.824744+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 192.168.123.104:0/1694205835' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:45.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:45 vm08 bash[23232]: audit 2026-03-09T20:20:44.824744+0000 mon.c (mon.1) 8 : audit [DBG] from='client.? 
192.168.123.104:0/1694205835' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:20:46.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:46 vm04 bash[22793]: cluster 2026-03-09T20:20:45.215160+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:46.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:46 vm04 bash[22793]: cluster 2026-03-09T20:20:45.215160+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:46.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:46 vm03 bash[20708]: cluster 2026-03-09T20:20:45.215160+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:46.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:46 vm03 bash[20708]: cluster 2026-03-09T20:20:45.215160+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:46.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:46 vm08 bash[23232]: cluster 2026-03-09T20:20:45.215160+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:46.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:46 vm08 bash[23232]: cluster 2026-03-09T20:20:45.215160+0000 mgr.a (mgr.14150) 105 : cluster [DBG] pgmap v57: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:48.324 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:48 vm04 bash[22793]: cluster 2026-03-09T20:20:47.215381+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:48.324 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:48 vm04 bash[22793]: cluster 2026-03-09T20:20:47.215381+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:48.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:48 vm03 bash[20708]: cluster 2026-03-09T20:20:47.215381+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:48.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:48 vm03 bash[20708]: cluster 2026-03-09T20:20:47.215381+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:48.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:48 vm08 bash[23232]: cluster 2026-03-09T20:20:47.215381+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:48.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:48 vm08 bash[23232]: cluster 2026-03-09T20:20:47.215381+0000 mgr.a (mgr.14150) 106 : cluster [DBG] pgmap v58: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:50.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:50 vm04 bash[22793]: cluster 2026-03-09T20:20:49.215640+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:50.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:50 vm04 bash[22793]: cluster 2026-03-09T20:20:49.215640+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:50.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:50 vm03 
bash[20708]: cluster 2026-03-09T20:20:49.215640+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:50.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:50 vm03 bash[20708]: cluster 2026-03-09T20:20:49.215640+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:50.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:50 vm08 bash[23232]: cluster 2026-03-09T20:20:49.215640+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:50.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:50 vm08 bash[23232]: cluster 2026-03-09T20:20:49.215640+0000 mgr.a (mgr.14150) 107 : cluster [DBG] pgmap v59: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:52.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:52 vm04 bash[22793]: cluster 2026-03-09T20:20:51.215862+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:52.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:52 vm04 bash[22793]: cluster 2026-03-09T20:20:51.215862+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:52.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:52 vm03 bash[20708]: cluster 2026-03-09T20:20:51.215862+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:52.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:52 vm03 bash[20708]: cluster 2026-03-09T20:20:51.215862+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:52.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:52 vm08 bash[23232]: cluster 2026-03-09T20:20:51.215862+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:52.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:52 vm08 bash[23232]: cluster 2026-03-09T20:20:51.215862+0000 mgr.a (mgr.14150) 108 : cluster [DBG] pgmap v60: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:53.500 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:53 vm04 bash[22793]: audit 2026-03-09T20:20:53.238823+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:20:53.500 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:53 vm04 bash[22793]: audit 2026-03-09T20:20:53.238823+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:20:53.500 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:53 vm04 bash[22793]: audit 2026-03-09T20:20:53.239358+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:53.500 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:53 vm04 bash[22793]: audit 2026-03-09T20:20:53.239358+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:53.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:53 vm03 
bash[20708]: audit 2026-03-09T20:20:53.238823+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:20:53.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:53 vm03 bash[20708]: audit 2026-03-09T20:20:53.238823+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:20:53.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:53 vm03 bash[20708]: audit 2026-03-09T20:20:53.239358+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:53.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:53 vm03 bash[20708]: audit 2026-03-09T20:20:53.239358+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:53.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:53 vm08 bash[23232]: audit 2026-03-09T20:20:53.238823+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:20:53.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:53 vm08 bash[23232]: audit 2026-03-09T20:20:53.238823+0000 mon.a (mon.0) 336 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch 2026-03-09T20:20:53.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:53 vm08 bash[23232]: audit 2026-03-09T20:20:53.239358+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:53.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:53 vm08 bash[23232]: audit 2026-03-09T20:20:53.239358+0000 mon.a (mon.0) 337 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:20:54.043 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:53 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:20:54.044 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:20:53 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:20:54.342 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. 
Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:20:54.342 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:20:54 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:20:54.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 bash[22793]: cluster 2026-03-09T20:20:53.216062+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:54.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 bash[22793]: cluster 2026-03-09T20:20:53.216062+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:54.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 bash[22793]: cephadm 2026-03-09T20:20:53.239741+0000 mgr.a (mgr.14150) 110 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-09T20:20:54.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 bash[22793]: cephadm 2026-03-09T20:20:53.239741+0000 mgr.a (mgr.14150) 110 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-09T20:20:54.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 bash[22793]: audit 2026-03-09T20:20:54.267601+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:54.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 bash[22793]: audit 2026-03-09T20:20:54.267601+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:54.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 bash[22793]: audit 2026-03-09T20:20:54.272844+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 bash[22793]: audit 2026-03-09T20:20:54.272844+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 bash[22793]: audit 2026-03-09T20:20:54.276548+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:54 vm04 bash[22793]: audit 2026-03-09T20:20:54.276548+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:54 vm03 bash[20708]: cluster 2026-03-09T20:20:53.216062+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:54 vm03 bash[20708]: cluster 2026-03-09T20:20:53.216062+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:54 vm03 bash[20708]: cephadm 2026-03-09T20:20:53.239741+0000 mgr.a (mgr.14150) 110 : 
cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-09T20:20:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:54 vm03 bash[20708]: cephadm 2026-03-09T20:20:53.239741+0000 mgr.a (mgr.14150) 110 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-09T20:20:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:54 vm03 bash[20708]: audit 2026-03-09T20:20:54.267601+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:54 vm03 bash[20708]: audit 2026-03-09T20:20:54.267601+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:54 vm03 bash[20708]: audit 2026-03-09T20:20:54.272844+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:54 vm03 bash[20708]: audit 2026-03-09T20:20:54.272844+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:54 vm03 bash[20708]: audit 2026-03-09T20:20:54.276548+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:54 vm03 bash[20708]: audit 2026-03-09T20:20:54.276548+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:54 vm08 bash[23232]: cluster 2026-03-09T20:20:53.216062+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:54.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:54 vm08 bash[23232]: cluster 2026-03-09T20:20:53.216062+0000 mgr.a (mgr.14150) 109 : cluster [DBG] pgmap v61: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:54.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:54 vm08 bash[23232]: cephadm 2026-03-09T20:20:53.239741+0000 mgr.a (mgr.14150) 110 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-09T20:20:54.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:54 vm08 bash[23232]: cephadm 2026-03-09T20:20:53.239741+0000 mgr.a (mgr.14150) 110 : cephadm [INF] Deploying daemon osd.1 on vm04 2026-03-09T20:20:54.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:54 vm08 bash[23232]: audit 2026-03-09T20:20:54.267601+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:54.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:54 vm08 bash[23232]: audit 2026-03-09T20:20:54.267601+0000 mon.a (mon.0) 338 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:20:54.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:54 vm08 bash[23232]: audit 2026-03-09T20:20:54.272844+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:54 vm08 bash[23232]: audit 
2026-03-09T20:20:54.272844+0000 mon.a (mon.0) 339 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:54 vm08 bash[23232]: audit 2026-03-09T20:20:54.276548+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:54.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:54 vm08 bash[23232]: audit 2026-03-09T20:20:54.276548+0000 mon.a (mon.0) 340 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:20:55.739 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:55 vm04 bash[22793]: cluster 2026-03-09T20:20:55.216306+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:55.739 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:55 vm04 bash[22793]: cluster 2026-03-09T20:20:55.216306+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:55.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:55 vm08 bash[23232]: cluster 2026-03-09T20:20:55.216306+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:55.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:55 vm08 bash[23232]: cluster 2026-03-09T20:20:55.216306+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:55.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:55 vm03 bash[20708]: cluster 2026-03-09T20:20:55.216306+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:55.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:55 vm03 bash[20708]: cluster 2026-03-09T20:20:55.216306+0000 mgr.a (mgr.14150) 111 : cluster [DBG] pgmap v62: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:58.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:58 vm04 bash[22793]: cluster 2026-03-09T20:20:57.216525+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:58.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:58 vm04 bash[22793]: cluster 2026-03-09T20:20:57.216525+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:58.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:58 vm04 bash[22793]: audit 2026-03-09T20:20:57.589027+0000 mon.b (mon.2) 2 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:58 vm04 bash[22793]: audit 2026-03-09T20:20:57.589027+0000 mon.b (mon.2) 2 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:58 vm04 bash[22793]: audit 2026-03-09T20:20:57.591662+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 
20:20:58 vm04 bash[22793]: audit 2026-03-09T20:20:57.591662+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:58 vm03 bash[20708]: cluster 2026-03-09T20:20:57.216525+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:58.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:58 vm03 bash[20708]: cluster 2026-03-09T20:20:57.216525+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:58.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:58 vm03 bash[20708]: audit 2026-03-09T20:20:57.589027+0000 mon.b (mon.2) 2 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:58 vm03 bash[20708]: audit 2026-03-09T20:20:57.589027+0000 mon.b (mon.2) 2 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:58 vm03 bash[20708]: audit 2026-03-09T20:20:57.591662+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:58 vm03 bash[20708]: audit 2026-03-09T20:20:57.591662+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:58 vm08 bash[23232]: cluster 2026-03-09T20:20:57.216525+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:58.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:58 vm08 bash[23232]: cluster 2026-03-09T20:20:57.216525+0000 mgr.a (mgr.14150) 112 : cluster [DBG] pgmap v63: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:20:58.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:58 vm08 bash[23232]: audit 2026-03-09T20:20:57.589027+0000 mon.b (mon.2) 2 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:58 vm08 bash[23232]: audit 2026-03-09T20:20:57.589027+0000 mon.b (mon.2) 2 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:58 vm08 bash[23232]: audit 2026-03-09T20:20:57.591662+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:58.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:58 vm08 bash[23232]: audit 
2026-03-09T20:20:57.591662+0000 mon.a (mon.0) 341 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch 2026-03-09T20:20:59.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:59 vm08 bash[23232]: audit 2026-03-09T20:20:58.340981+0000 mon.a (mon.0) 342 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:20:59.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:59 vm08 bash[23232]: audit 2026-03-09T20:20:58.340981+0000 mon.a (mon.0) 342 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:20:59.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:59 vm08 bash[23232]: cluster 2026-03-09T20:20:58.343258+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:20:59.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:59 vm08 bash[23232]: cluster 2026-03-09T20:20:58.343258+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:20:59.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:59 vm08 bash[23232]: audit 2026-03-09T20:20:58.343356+0000 mon.b (mon.2) 3 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:59 vm08 bash[23232]: audit 2026-03-09T20:20:58.343356+0000 mon.b (mon.2) 3 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:59 vm08 bash[23232]: audit 2026-03-09T20:20:58.343855+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:59.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:59 vm08 bash[23232]: audit 2026-03-09T20:20:58.343855+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:59.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:59 vm08 bash[23232]: audit 2026-03-09T20:20:58.349942+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:20:59 vm08 bash[23232]: audit 2026-03-09T20:20:58.349942+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:59 vm04 bash[22793]: audit 2026-03-09T20:20:58.340981+0000 mon.a (mon.0) 342 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:20:59.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:59 vm04 bash[22793]: audit 2026-03-09T20:20:58.340981+0000 mon.a (mon.0) 342 : 
audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:20:59.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:59 vm04 bash[22793]: cluster 2026-03-09T20:20:58.343258+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:20:59.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:59 vm04 bash[22793]: cluster 2026-03-09T20:20:58.343258+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:20:59.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:59 vm04 bash[22793]: audit 2026-03-09T20:20:58.343356+0000 mon.b (mon.2) 3 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:59 vm04 bash[22793]: audit 2026-03-09T20:20:58.343356+0000 mon.b (mon.2) 3 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:59 vm04 bash[22793]: audit 2026-03-09T20:20:58.343855+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:59.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:59 vm04 bash[22793]: audit 2026-03-09T20:20:58.343855+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:59.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:59 vm04 bash[22793]: audit 2026-03-09T20:20:58.349942+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:20:59 vm04 bash[22793]: audit 2026-03-09T20:20:58.349942+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:59 vm03 bash[20708]: audit 2026-03-09T20:20:58.340981+0000 mon.a (mon.0) 342 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:20:59.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:59 vm03 bash[20708]: audit 2026-03-09T20:20:58.340981+0000 mon.a (mon.0) 342 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished 2026-03-09T20:20:59.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:59 vm03 bash[20708]: cluster 2026-03-09T20:20:58.343258+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:20:59.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:59 vm03 bash[20708]: cluster 2026-03-09T20:20:58.343258+0000 mon.a (mon.0) 343 : cluster [DBG] osdmap e11: 2 total, 1 up, 2 in 2026-03-09T20:20:59.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:59 vm03 
bash[20708]: audit 2026-03-09T20:20:58.343356+0000 mon.b (mon.2) 3 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:59 vm03 bash[20708]: audit 2026-03-09T20:20:58.343356+0000 mon.b (mon.2) 3 : audit [INF] from='osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:59 vm03 bash[20708]: audit 2026-03-09T20:20:58.343855+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:59.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:59 vm03 bash[20708]: audit 2026-03-09T20:20:58.343855+0000 mon.a (mon.0) 344 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:20:59.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:59 vm03 bash[20708]: audit 2026-03-09T20:20:58.349942+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:20:59.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:20:59 vm03 bash[20708]: audit 2026-03-09T20:20:58.349942+0000 mon.a (mon.0) 345 : audit [INF] from='osd.1 ' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]: dispatch 2026-03-09T20:21:00.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:00 vm04 bash[22793]: cluster 2026-03-09T20:20:59.216757+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:00.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:00 vm04 bash[22793]: cluster 2026-03-09T20:20:59.216757+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:00.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:00 vm04 bash[22793]: audit 2026-03-09T20:20:59.413336+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T20:21:00.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:00 vm04 bash[22793]: audit 2026-03-09T20:20:59.413336+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T20:21:00.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:00 vm04 bash[22793]: cluster 2026-03-09T20:20:59.415641+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T20:21:00.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:00 vm04 bash[22793]: cluster 2026-03-09T20:20:59.415641+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T20:21:00.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:00 vm04 bash[22793]: audit 2026-03-09T20:20:59.417258+0000 mon.a (mon.0) 348 : 
audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:00 vm04 bash[22793]: audit 2026-03-09T20:20:59.417258+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:00 vm04 bash[22793]: audit 2026-03-09T20:20:59.424871+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:00 vm04 bash[22793]: audit 2026-03-09T20:20:59.424871+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:00 vm08 bash[23232]: cluster 2026-03-09T20:20:59.216757+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:00.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:00 vm08 bash[23232]: cluster 2026-03-09T20:20:59.216757+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:00.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:00 vm08 bash[23232]: audit 2026-03-09T20:20:59.413336+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T20:21:00.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:00 vm08 bash[23232]: audit 2026-03-09T20:20:59.413336+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T20:21:00.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:00 vm08 bash[23232]: cluster 2026-03-09T20:20:59.415641+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T20:21:00.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:00 vm08 bash[23232]: cluster 2026-03-09T20:20:59.415641+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T20:21:00.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:00 vm08 bash[23232]: audit 2026-03-09T20:20:59.417258+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:00 vm08 bash[23232]: audit 2026-03-09T20:20:59.417258+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:00 vm08 bash[23232]: audit 2026-03-09T20:20:59.424871+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:00 vm08 bash[23232]: audit 2026-03-09T20:20:59.424871+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:00 vm03 bash[20708]: cluster 2026-03-09T20:20:59.216757+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:00 vm03 bash[20708]: cluster 2026-03-09T20:20:59.216757+0000 mgr.a (mgr.14150) 113 : cluster [DBG] pgmap v65: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:00 vm03 bash[20708]: audit 2026-03-09T20:20:59.413336+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T20:21:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:00 vm03 bash[20708]: audit 2026-03-09T20:20:59.413336+0000 mon.a (mon.0) 346 : audit [INF] from='osd.1 ' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=vm04", "root=default"]}]': finished 2026-03-09T20:21:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:00 vm03 bash[20708]: cluster 2026-03-09T20:20:59.415641+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T20:21:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:00 vm03 bash[20708]: cluster 2026-03-09T20:20:59.415641+0000 mon.a (mon.0) 347 : cluster [DBG] osdmap e12: 2 total, 1 up, 2 in 2026-03-09T20:21:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:00 vm03 bash[20708]: audit 2026-03-09T20:20:59.417258+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:00 vm03 bash[20708]: audit 2026-03-09T20:20:59.417258+0000 mon.a (mon.0) 348 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:00 vm03 bash[20708]: audit 2026-03-09T20:20:59.424871+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:00 vm03 bash[20708]: audit 2026-03-09T20:20:59.424871+0000 mon.a (mon.0) 349 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.428 INFO:teuthology.orchestra.run.vm04.stdout:Created osd(s) 1 on host 'vm04' 2026-03-09T20:21:01.504 DEBUG:teuthology.orchestra.run.vm04:osd.1> sudo journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@osd.1.service 2026-03-09T20:21:01.505 INFO:tasks.cephadm:Deploying osd.2 on vm08 with /dev/vde... 
2026-03-09T20:21:01.505 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df ceph-volume -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- lvm zap /dev/vde 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: cluster 2026-03-09T20:20:58.548995+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: cluster 2026-03-09T20:20:58.548995+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: cluster 2026-03-09T20:20:58.549040+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: cluster 2026-03-09T20:20:58.549040+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.423063+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.423063+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: cluster 2026-03-09T20:21:00.426840+0000 mon.a (mon.0) 351 : cluster [INF] osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990] boot 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: cluster 2026-03-09T20:21:00.426840+0000 mon.a (mon.0) 351 : cluster [INF] osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990] boot 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: cluster 2026-03-09T20:21:00.426905+0000 mon.a (mon.0) 352 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: cluster 2026-03-09T20:21:00.426905+0000 mon.a (mon.0) 352 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.427738+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.427738+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.476646+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.476646+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' 
entity='mgr.a' 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.480056+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.480056+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.480629+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.480629+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.481083+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.481083+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.484391+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:00.484391+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: cluster 2026-03-09T20:21:01.216967+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: cluster 2026-03-09T20:21:01.216967+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:01.414020+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:01.414020+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:01.419611+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:01.419611+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:01.423724+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:01 vm04 bash[22793]: audit 2026-03-09T20:21:01.423724+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: cluster 2026-03-09T20:20:58.548995+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: cluster 2026-03-09T20:20:58.548995+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: cluster 2026-03-09T20:20:58.549040+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: cluster 2026-03-09T20:20:58.549040+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.423063+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.423063+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: cluster 2026-03-09T20:21:00.426840+0000 mon.a (mon.0) 351 : cluster [INF] osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990] boot 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: cluster 2026-03-09T20:21:00.426840+0000 mon.a (mon.0) 351 : cluster [INF] osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990] boot 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: cluster 2026-03-09T20:21:00.426905+0000 mon.a (mon.0) 352 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: cluster 2026-03-09T20:21:00.426905+0000 mon.a (mon.0) 352 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.427738+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.427738+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.476646+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.806 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.476646+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.480056+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.480056+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.480629+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.480629+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.481083+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.481083+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.484391+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:00.484391+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: cluster 2026-03-09T20:21:01.216967+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:01.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: cluster 2026-03-09T20:21:01.216967+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:01.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:01.414020+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:01.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:01.414020+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:01.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:01.419611+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' 
entity='mgr.a' 2026-03-09T20:21:01.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:01.419611+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:01.423724+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:01 vm08 bash[23232]: audit 2026-03-09T20:21:01.423724+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: cluster 2026-03-09T20:20:58.548995+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: cluster 2026-03-09T20:20:58.548995+0000 osd.1 (osd.1) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: cluster 2026-03-09T20:20:58.549040+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: cluster 2026-03-09T20:20:58.549040+0000 osd.1 (osd.1) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.423063+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.423063+0000 mon.a (mon.0) 350 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: cluster 2026-03-09T20:21:00.426840+0000 mon.a (mon.0) 351 : cluster [INF] osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990] boot 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: cluster 2026-03-09T20:21:00.426840+0000 mon.a (mon.0) 351 : cluster [INF] osd.1 [v2:192.168.123.104:6800/381841990,v1:192.168.123.104:6801/381841990] boot 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: cluster 2026-03-09T20:21:00.426905+0000 mon.a (mon.0) 352 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: cluster 2026-03-09T20:21:00.426905+0000 mon.a (mon.0) 352 : cluster [DBG] osdmap e13: 2 total, 2 up, 2 in 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.427738+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.427738+0000 mon.a (mon.0) 353 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:21:01.907 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.476646+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.476646+0000 mon.a (mon.0) 354 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.480056+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.480056+0000 mon.a (mon.0) 355 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.480629+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.480629+0000 mon.a (mon.0) 356 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.481083+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.481083+0000 mon.a (mon.0) 357 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.484391+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:00.484391+0000 mon.a (mon.0) 358 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: cluster 2026-03-09T20:21:01.216967+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:01.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: cluster 2026-03-09T20:21:01.216967+0000 mgr.a (mgr.14150) 114 : cluster [DBG] pgmap v68: 0 pgs: ; 0 B data, 26 MiB used, 20 GiB / 20 GiB avail 2026-03-09T20:21:01.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:01.414020+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:01.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:01.414020+0000 mon.a (mon.0) 359 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T20:21:01.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:01.419611+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:01.419611+0000 mon.a (mon.0) 360 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:01.423724+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:01.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:01 vm03 bash[20708]: audit 2026-03-09T20:21:01.423724+0000 mon.a (mon.0) 361 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:02.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:02 vm08 bash[23232]: cluster 2026-03-09T20:21:01.497085+0000 mon.a (mon.0) 362 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T20:21:02.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:02 vm08 bash[23232]: cluster 2026-03-09T20:21:01.497085+0000 mon.a (mon.0) 362 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T20:21:02.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:02 vm04 bash[22793]: cluster 2026-03-09T20:21:01.497085+0000 mon.a (mon.0) 362 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T20:21:02.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:02 vm04 bash[22793]: cluster 2026-03-09T20:21:01.497085+0000 mon.a (mon.0) 362 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T20:21:02.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:02 vm03 bash[20708]: cluster 2026-03-09T20:21:01.497085+0000 mon.a (mon.0) 362 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T20:21:02.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:02 vm03 bash[20708]: cluster 2026-03-09T20:21:01.497085+0000 mon.a (mon.0) 362 : cluster [DBG] osdmap e14: 2 total, 2 up, 2 in 2026-03-09T20:21:03.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:03 vm08 bash[23232]: cluster 2026-03-09T20:21:03.217228+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:03.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:03 vm08 bash[23232]: cluster 2026-03-09T20:21:03.217228+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:03.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:03 vm04 bash[22793]: cluster 2026-03-09T20:21:03.217228+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:03.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:03 vm04 bash[22793]: cluster 2026-03-09T20:21:03.217228+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:03.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:03 vm03 bash[20708]: cluster 2026-03-09T20:21:03.217228+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:03.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:03 vm03 bash[20708]: cluster 2026-03-09T20:21:03.217228+0000 mgr.a (mgr.14150) 115 : cluster [DBG] pgmap v70: 0 pgs: ; 0 B data, 53 MiB 
used, 40 GiB / 40 GiB avail 2026-03-09T20:21:05.116 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.c/config 2026-03-09T20:21:05.962 INFO:teuthology.orchestra.run.vm08.stdout: 2026-03-09T20:21:05.974 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph orch daemon add osd vm08:/dev/vde 2026-03-09T20:21:06.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:06 vm04 bash[22793]: cluster 2026-03-09T20:21:05.217516+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:06.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:06 vm04 bash[22793]: cluster 2026-03-09T20:21:05.217516+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:06.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:06 vm03 bash[20708]: cluster 2026-03-09T20:21:05.217516+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:06.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:06 vm03 bash[20708]: cluster 2026-03-09T20:21:05.217516+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:06.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:06 vm08 bash[23232]: cluster 2026-03-09T20:21:05.217516+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:06.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:06 vm08 bash[23232]: cluster 2026-03-09T20:21:05.217516+0000 mgr.a (mgr.14150) 116 : cluster [DBG] pgmap v71: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: cephadm 2026-03-09T20:21:07.032248+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm04 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: cephadm 2026-03-09T20:21:07.032248+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm04 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.038509+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.038509+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.043257+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.043257+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.044329+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.044329+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: cephadm 2026-03-09T20:21:07.044719+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm04 to 455.7M 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: cephadm 2026-03-09T20:21:07.044719+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm04 to 455.7M 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: cephadm 2026-03-09T20:21:07.045117+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm04 to 477918822: error parsing value: Value '477918822' is below minimum 939524096 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: cephadm 2026-03-09T20:21:07.045117+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm04 to 477918822: error parsing value: Value '477918822' is below minimum 939524096 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.045375+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.045375+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.045783+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.045783+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.049820+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: audit 2026-03-09T20:21:07.049820+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: cluster 2026-03-09T20:21:07.217761+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:08.306 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:08 vm08 bash[23232]: cluster 2026-03-09T20:21:07.217761+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 
GiB / 40 GiB avail 2026-03-09T20:21:08.369 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: cephadm 2026-03-09T20:21:07.032248+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm04 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: cephadm 2026-03-09T20:21:07.032248+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm04 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.038509+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.038509+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.043257+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.043257+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.044329+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.044329+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: cephadm 2026-03-09T20:21:07.044719+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm04 to 455.7M 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: cephadm 2026-03-09T20:21:07.044719+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm04 to 455.7M 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: cephadm 2026-03-09T20:21:07.045117+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm04 to 477918822: error parsing value: Value '477918822' is below minimum 939524096 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: cephadm 2026-03-09T20:21:07.045117+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm04 to 477918822: error parsing value: Value '477918822' is below minimum 939524096 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.045375+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.045375+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.045783+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.045783+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.049820+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: audit 2026-03-09T20:21:07.049820+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: cluster 2026-03-09T20:21:07.217761+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:08.370 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:08 vm04 bash[22793]: cluster 2026-03-09T20:21:07.217761+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: cephadm 2026-03-09T20:21:07.032248+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm04 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: cephadm 2026-03-09T20:21:07.032248+0000 mgr.a (mgr.14150) 117 : cephadm [INF] Detected new or changed devices on vm04 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.038509+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.038509+0000 mon.a (mon.0) 363 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.043257+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.043257+0000 mon.a (mon.0) 364 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.044329+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.044329+0000 mon.a (mon.0) 365 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch 
2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: cephadm 2026-03-09T20:21:07.044719+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm04 to 455.7M 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: cephadm 2026-03-09T20:21:07.044719+0000 mgr.a (mgr.14150) 118 : cephadm [INF] Adjusting osd_memory_target on vm04 to 455.7M 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: cephadm 2026-03-09T20:21:07.045117+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm04 to 477918822: error parsing value: Value '477918822' is below minimum 939524096 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: cephadm 2026-03-09T20:21:07.045117+0000 mgr.a (mgr.14150) 119 : cephadm [WRN] Unable to set osd_memory_target on vm04 to 477918822: error parsing value: Value '477918822' is below minimum 939524096 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.045375+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.045375+0000 mon.a (mon.0) 366 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.045783+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.045783+0000 mon.a (mon.0) 367 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.049820+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: audit 2026-03-09T20:21:07.049820+0000 mon.a (mon.0) 368 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: cluster 2026-03-09T20:21:07.217761+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:08.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:08 vm03 bash[20708]: cluster 2026-03-09T20:21:07.217761+0000 mgr.a (mgr.14150) 120 : cluster [DBG] pgmap v72: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:09.580 INFO:teuthology.orchestra.run.vm08.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.c/config 2026-03-09T20:21:10.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:10 vm08 bash[23232]: cluster 2026-03-09T20:21:09.218022+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:10.556 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:10 vm08 bash[23232]: cluster 2026-03-09T20:21:09.218022+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:10.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:10 vm08 bash[23232]: audit 2026-03-09T20:21:09.814922+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:21:10.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:10 vm08 bash[23232]: audit 2026-03-09T20:21:09.814922+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:21:10.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:10 vm08 bash[23232]: audit 2026-03-09T20:21:09.816362+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:21:10.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:10 vm08 bash[23232]: audit 2026-03-09T20:21:09.816362+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:21:10.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:10 vm08 bash[23232]: audit 2026-03-09T20:21:09.817011+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:10.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:10 vm08 bash[23232]: audit 2026-03-09T20:21:09.817011+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:10.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:10 vm04 bash[22793]: cluster 2026-03-09T20:21:09.218022+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:10.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:10 vm04 bash[22793]: cluster 2026-03-09T20:21:09.218022+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:10.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:10 vm04 bash[22793]: audit 2026-03-09T20:21:09.814922+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:21:10.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:10 vm04 bash[22793]: audit 2026-03-09T20:21:09.814922+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:21:10.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:10 vm04 bash[22793]: audit 2026-03-09T20:21:09.816362+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:21:10.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:10 vm04 bash[22793]: audit 2026-03-09T20:21:09.816362+0000 mon.a 
(mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:21:10.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:10 vm04 bash[22793]: audit 2026-03-09T20:21:09.817011+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:10.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:10 vm04 bash[22793]: audit 2026-03-09T20:21:09.817011+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:10.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:10 vm03 bash[20708]: cluster 2026-03-09T20:21:09.218022+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:10 vm03 bash[20708]: cluster 2026-03-09T20:21:09.218022+0000 mgr.a (mgr.14150) 121 : cluster [DBG] pgmap v73: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:10 vm03 bash[20708]: audit 2026-03-09T20:21:09.814922+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:21:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:10 vm03 bash[20708]: audit 2026-03-09T20:21:09.814922+0000 mon.a (mon.0) 369 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch 2026-03-09T20:21:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:10 vm03 bash[20708]: audit 2026-03-09T20:21:09.816362+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:21:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:10 vm03 bash[20708]: audit 2026-03-09T20:21:09.816362+0000 mon.a (mon.0) 370 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch 2026-03-09T20:21:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:10 vm03 bash[20708]: audit 2026-03-09T20:21:09.817011+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:10 vm03 bash[20708]: audit 2026-03-09T20:21:09.817011+0000 mon.a (mon.0) 371 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:11.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:11 vm04 bash[22793]: audit 2026-03-09T20:21:09.813541+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:21:11.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:11 vm04 bash[22793]: audit 2026-03-09T20:21:09.813541+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14259 -' entity='client.admin' 
cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:21:11.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:11 vm03 bash[20708]: audit 2026-03-09T20:21:09.813541+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:21:11.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:11 vm03 bash[20708]: audit 2026-03-09T20:21:09.813541+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:21:11.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:11 vm08 bash[23232]: audit 2026-03-09T20:21:09.813541+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:21:11.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:11 vm08 bash[23232]: audit 2026-03-09T20:21:09.813541+0000 mgr.a (mgr.14150) 122 : audit [DBG] from='client.14259 -' entity='client.admin' cmd=[{"prefix": "orch daemon add osd", "svc_arg": "vm08:/dev/vde", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:21:12.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:12 vm04 bash[22793]: cluster 2026-03-09T20:21:11.218230+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:12.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:12 vm04 bash[22793]: cluster 2026-03-09T20:21:11.218230+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:12.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:12 vm03 bash[20708]: cluster 2026-03-09T20:21:11.218230+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:12.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:12 vm03 bash[20708]: cluster 2026-03-09T20:21:11.218230+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:12.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:12 vm08 bash[23232]: cluster 2026-03-09T20:21:11.218230+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:12.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:12 vm08 bash[23232]: cluster 2026-03-09T20:21:11.218230+0000 mgr.a (mgr.14150) 123 : cluster [DBG] pgmap v74: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: cluster 2026-03-09T20:21:13.218451+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: cluster 2026-03-09T20:21:13.218451+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: audit 2026-03-09T20:21:14.130620+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 
192.168.123.108:0/65249821' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: audit 2026-03-09T20:21:14.130620+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 192.168.123.108:0/65249821' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: audit 2026-03-09T20:21:14.133334+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: audit 2026-03-09T20:21:14.133334+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: audit 2026-03-09T20:21:14.136259+0000 mon.a (mon.0) 373 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]': finished 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: audit 2026-03-09T20:21:14.136259+0000 mon.a (mon.0) 373 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]': finished 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: cluster 2026-03-09T20:21:14.139046+0000 mon.a (mon.0) 374 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: cluster 2026-03-09T20:21:14.139046+0000 mon.a (mon.0) 374 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: audit 2026-03-09T20:21:14.139221+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:14.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:14 vm08 bash[23232]: audit 2026-03-09T20:21:14.139221+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:14.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: cluster 2026-03-09T20:21:13.218451+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:14.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: cluster 2026-03-09T20:21:13.218451+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:14.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: audit 2026-03-09T20:21:14.130620+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 
192.168.123.108:0/65249821' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: audit 2026-03-09T20:21:14.130620+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 192.168.123.108:0/65249821' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: audit 2026-03-09T20:21:14.133334+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: audit 2026-03-09T20:21:14.133334+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: audit 2026-03-09T20:21:14.136259+0000 mon.a (mon.0) 373 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]': finished 2026-03-09T20:21:14.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: audit 2026-03-09T20:21:14.136259+0000 mon.a (mon.0) 373 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]': finished 2026-03-09T20:21:14.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: cluster 2026-03-09T20:21:14.139046+0000 mon.a (mon.0) 374 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:21:14.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: cluster 2026-03-09T20:21:14.139046+0000 mon.a (mon.0) 374 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:21:14.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: audit 2026-03-09T20:21:14.139221+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:14.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:14 vm04 bash[22793]: audit 2026-03-09T20:21:14.139221+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: cluster 2026-03-09T20:21:13.218451+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: cluster 2026-03-09T20:21:13.218451+0000 mgr.a (mgr.14150) 124 : cluster [DBG] pgmap v75: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: audit 2026-03-09T20:21:14.130620+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 
192.168.123.108:0/65249821' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: audit 2026-03-09T20:21:14.130620+0000 mon.b (mon.2) 4 : audit [INF] from='client.? 192.168.123.108:0/65249821' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: audit 2026-03-09T20:21:14.133334+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: audit 2026-03-09T20:21:14.133334+0000 mon.a (mon.0) 372 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]: dispatch 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: audit 2026-03-09T20:21:14.136259+0000 mon.a (mon.0) 373 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]': finished 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: audit 2026-03-09T20:21:14.136259+0000 mon.a (mon.0) 373 : audit [INF] from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e82b252d-637a-40ce-858c-11bb0bf30bdc"}]': finished 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: cluster 2026-03-09T20:21:14.139046+0000 mon.a (mon.0) 374 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: cluster 2026-03-09T20:21:14.139046+0000 mon.a (mon.0) 374 : cluster [DBG] osdmap e15: 3 total, 2 up, 3 in 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: audit 2026-03-09T20:21:14.139221+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:14.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:14 vm03 bash[20708]: audit 2026-03-09T20:21:14.139221+0000 mon.a (mon.0) 375 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:15.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:15 vm04 bash[22793]: audit 2026-03-09T20:21:14.703973+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.108:0/1564984929' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:21:15.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:15 vm04 bash[22793]: audit 2026-03-09T20:21:14.703973+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.108:0/1564984929' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:21:15.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:15 vm03 bash[20708]: audit 2026-03-09T20:21:14.703973+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 
192.168.123.108:0/1564984929' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:21:15.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:15 vm03 bash[20708]: audit 2026-03-09T20:21:14.703973+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.108:0/1564984929' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:21:15.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:15 vm08 bash[23232]: audit 2026-03-09T20:21:14.703973+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.108:0/1564984929' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:21:15.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:15 vm08 bash[23232]: audit 2026-03-09T20:21:14.703973+0000 mon.c (mon.1) 9 : audit [DBG] from='client.? 192.168.123.108:0/1564984929' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch 2026-03-09T20:21:16.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:16 vm04 bash[22793]: cluster 2026-03-09T20:21:15.218606+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:16.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:16 vm04 bash[22793]: cluster 2026-03-09T20:21:15.218606+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:16.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:16 vm03 bash[20708]: cluster 2026-03-09T20:21:15.218606+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:16.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:16 vm03 bash[20708]: cluster 2026-03-09T20:21:15.218606+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:16.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:16 vm08 bash[23232]: cluster 2026-03-09T20:21:15.218606+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:16.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:16 vm08 bash[23232]: cluster 2026-03-09T20:21:15.218606+0000 mgr.a (mgr.14150) 125 : cluster [DBG] pgmap v77: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:18.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:18 vm08 bash[23232]: cluster 2026-03-09T20:21:17.218844+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:18.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:18 vm08 bash[23232]: cluster 2026-03-09T20:21:17.218844+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:18.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:18 vm04 bash[22793]: cluster 2026-03-09T20:21:17.218844+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:18.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:18 vm04 bash[22793]: cluster 2026-03-09T20:21:17.218844+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:18.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:18 vm03 bash[20708]: cluster 2026-03-09T20:21:17.218844+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 
MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:18.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:18 vm03 bash[20708]: cluster 2026-03-09T20:21:17.218844+0000 mgr.a (mgr.14150) 126 : cluster [DBG] pgmap v78: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:20.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:20 vm04 bash[22793]: cluster 2026-03-09T20:21:19.219114+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:20.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:20 vm04 bash[22793]: cluster 2026-03-09T20:21:19.219114+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:20.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:20 vm03 bash[20708]: cluster 2026-03-09T20:21:19.219114+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:20.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:20 vm03 bash[20708]: cluster 2026-03-09T20:21:19.219114+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:20.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:20 vm08 bash[23232]: cluster 2026-03-09T20:21:19.219114+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:20.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:20 vm08 bash[23232]: cluster 2026-03-09T20:21:19.219114+0000 mgr.a (mgr.14150) 127 : cluster [DBG] pgmap v79: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:22 vm04 bash[22793]: cluster 2026-03-09T20:21:21.219344+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:22 vm04 bash[22793]: cluster 2026-03-09T20:21:21.219344+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:22.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:22 vm03 bash[20708]: cluster 2026-03-09T20:21:21.219344+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:22.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:22 vm03 bash[20708]: cluster 2026-03-09T20:21:21.219344+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:22 vm08 bash[23232]: cluster 2026-03-09T20:21:21.219344+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:22 vm08 bash[23232]: cluster 2026-03-09T20:21:21.219344+0000 mgr.a (mgr.14150) 128 : cluster [DBG] pgmap v80: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:23.510 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:23 vm08 bash[23232]: audit 2026-03-09T20:21:23.280739+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:21:23.510 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:23 vm08 bash[23232]: audit 
2026-03-09T20:21:23.280739+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:21:23.510 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:23 vm08 bash[23232]: audit 2026-03-09T20:21:23.281314+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:23.510 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:23 vm08 bash[23232]: audit 2026-03-09T20:21:23.281314+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:23 vm04 bash[22793]: audit 2026-03-09T20:21:23.280739+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:21:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:23 vm04 bash[22793]: audit 2026-03-09T20:21:23.280739+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:21:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:23 vm04 bash[22793]: audit 2026-03-09T20:21:23.281314+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:23 vm04 bash[22793]: audit 2026-03-09T20:21:23.281314+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:23.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:23 vm03 bash[20708]: audit 2026-03-09T20:21:23.280739+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:21:23.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:23 vm03 bash[20708]: audit 2026-03-09T20:21:23.280739+0000 mon.a (mon.0) 376 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch 2026-03-09T20:21:23.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:23 vm03 bash[20708]: audit 2026-03-09T20:21:23.281314+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:23.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:23 vm03 bash[20708]: audit 2026-03-09T20:21:23.281314+0000 mon.a (mon.0) 377 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:24.054 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
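The systemd warning just above flags the cephadm-generated unit template for using KillMode=none and recommends a safer setting such as 'mixed' or 'control-group'. Purely as an illustrative sketch of the mechanism the warning points at (not something this test run performs, and cephadm normally regenerates its own units, so overriding them by hand may not be desirable), a drop-in override for the unit named in the warning could look like this, assuming root access on the host:

    # Hypothetical drop-in override for the unit template flagged in the warning above;
    # KillMode=mixed restores systemd's process-lifecycle management for the service.
    sudo mkdir -p /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service.d
    sudo tee /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service.d/killmode.conf <<'EOF'
    [Service]
    KillMode=mixed
    EOF
    sudo systemctl daemon-reload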
2026-03-09T20:21:24.307 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:21:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:24 vm04 bash[22793]: cluster 2026-03-09T20:21:23.219608+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:24.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:24 vm04 bash[22793]: cluster 2026-03-09T20:21:23.219608+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:24.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:24 vm04 bash[22793]: cephadm 2026-03-09T20:21:23.281764+0000 mgr.a (mgr.14150) 130 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-09T20:21:24.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:24 vm04 bash[22793]: cephadm 2026-03-09T20:21:23.281764+0000 mgr.a (mgr.14150) 130 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-09T20:21:24.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:24 vm04 bash[22793]: audit 2026-03-09T20:21:24.265088+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:24.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:24 vm04 bash[22793]: audit 2026-03-09T20:21:24.265088+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:24.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:24 vm04 bash[22793]: audit 2026-03-09T20:21:24.270504+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:24 vm04 bash[22793]: audit 2026-03-09T20:21:24.270504+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:24 vm04 bash[22793]: audit 2026-03-09T20:21:24.274247+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:24 vm04 bash[22793]: audit 2026-03-09T20:21:24.274247+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:24 vm03 bash[20708]: cluster 2026-03-09T20:21:23.219608+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:24 vm03 bash[20708]: cluster 2026-03-09T20:21:23.219608+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:24 vm03 bash[20708]: cephadm 2026-03-09T20:21:23.281764+0000 mgr.a (mgr.14150) 130 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-09T20:21:24.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:24 vm03 bash[20708]: cephadm 2026-03-09T20:21:23.281764+0000 mgr.a (mgr.14150) 130 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-09T20:21:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:24 vm03 bash[20708]: audit 2026-03-09T20:21:24.265088+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:24 vm03 bash[20708]: audit 2026-03-09T20:21:24.265088+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:24 vm03 bash[20708]: audit 2026-03-09T20:21:24.270504+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:24 vm03 bash[20708]: audit 2026-03-09T20:21:24.270504+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:24 vm03 bash[20708]: audit 2026-03-09T20:21:24.274247+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:24 vm03 bash[20708]: audit 2026-03-09T20:21:24.274247+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 bash[23232]: cluster 2026-03-09T20:21:23.219608+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:24.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 bash[23232]: cluster 2026-03-09T20:21:23.219608+0000 mgr.a (mgr.14150) 129 : cluster [DBG] pgmap v81: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:24.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 bash[23232]: cephadm 2026-03-09T20:21:23.281764+0000 mgr.a (mgr.14150) 130 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-09T20:21:24.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 bash[23232]: cephadm 2026-03-09T20:21:23.281764+0000 mgr.a (mgr.14150) 130 : cephadm [INF] Deploying daemon osd.2 on vm08 2026-03-09T20:21:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 bash[23232]: audit 2026-03-09T20:21:24.265088+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 bash[23232]: audit 2026-03-09T20:21:24.265088+0000 mon.a (mon.0) 378 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 bash[23232]: audit 2026-03-09T20:21:24.270504+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 bash[23232]: audit 2026-03-09T20:21:24.270504+0000 mon.a (mon.0) 379 : audit [INF] from='mgr.14150 
192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 bash[23232]: audit 2026-03-09T20:21:24.274247+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:24 vm08 bash[23232]: audit 2026-03-09T20:21:24.274247+0000 mon.a (mon.0) 380 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:25.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:25 vm03 bash[20708]: cluster 2026-03-09T20:21:25.219940+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:25.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:25 vm03 bash[20708]: cluster 2026-03-09T20:21:25.219940+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:25.960 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:25 vm08 bash[23232]: cluster 2026-03-09T20:21:25.219940+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:25.960 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:25 vm08 bash[23232]: cluster 2026-03-09T20:21:25.219940+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:26.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:25 vm04 bash[22793]: cluster 2026-03-09T20:21:25.219940+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:26.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:25 vm04 bash[22793]: cluster 2026-03-09T20:21:25.219940+0000 mgr.a (mgr.14150) 131 : cluster [DBG] pgmap v82: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:28.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:28 vm08 bash[23232]: cluster 2026-03-09T20:21:27.220166+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:28.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:28 vm08 bash[23232]: cluster 2026-03-09T20:21:27.220166+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:28.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:28 vm08 bash[23232]: audit 2026-03-09T20:21:27.605891+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:28 vm08 bash[23232]: audit 2026-03-09T20:21:27.605891+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:28 vm08 bash[23232]: audit 2026-03-09T20:21:27.607555+0000 mon.c (mon.1) 10 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:28 vm08 bash[23232]: audit 2026-03-09T20:21:27.607555+0000 mon.c (mon.1) 10 : audit [INF] from='osd.2 
[v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:28 vm04 bash[22793]: cluster 2026-03-09T20:21:27.220166+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:28.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:28 vm04 bash[22793]: cluster 2026-03-09T20:21:27.220166+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:28.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:28 vm04 bash[22793]: audit 2026-03-09T20:21:27.605891+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:28 vm04 bash[22793]: audit 2026-03-09T20:21:27.605891+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:28 vm04 bash[22793]: audit 2026-03-09T20:21:27.607555+0000 mon.c (mon.1) 10 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:28 vm04 bash[22793]: audit 2026-03-09T20:21:27.607555+0000 mon.c (mon.1) 10 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:28 vm03 bash[20708]: cluster 2026-03-09T20:21:27.220166+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:28 vm03 bash[20708]: cluster 2026-03-09T20:21:27.220166+0000 mgr.a (mgr.14150) 132 : cluster [DBG] pgmap v83: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:28 vm03 bash[20708]: audit 2026-03-09T20:21:27.605891+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:28 vm03 bash[20708]: audit 2026-03-09T20:21:27.605891+0000 mon.a (mon.0) 381 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:28 vm03 bash[20708]: audit 2026-03-09T20:21:27.607555+0000 mon.c (mon.1) 10 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:28.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:28 vm03 bash[20708]: audit 2026-03-09T20:21:27.607555+0000 mon.c (mon.1) 10 : audit [INF] from='osd.2 
[v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: audit 2026-03-09T20:21:28.281246+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: audit 2026-03-09T20:21:28.281246+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: cluster 2026-03-09T20:21:28.284013+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: cluster 2026-03-09T20:21:28.284013+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: audit 2026-03-09T20:21:28.285340+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: audit 2026-03-09T20:21:28.285340+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: audit 2026-03-09T20:21:28.285440+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: audit 2026-03-09T20:21:28.285440+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: audit 2026-03-09T20:21:28.285968+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: audit 2026-03-09T20:21:28.285968+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: audit 2026-03-09T20:21:29.284311+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: audit 
2026-03-09T20:21:29.284311+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: cluster 2026-03-09T20:21:29.286811+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T20:21:29.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:29 vm08 bash[23232]: cluster 2026-03-09T20:21:29.286811+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T20:21:29.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: audit 2026-03-09T20:21:28.281246+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:21:29.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: audit 2026-03-09T20:21:28.281246+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:21:29.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: cluster 2026-03-09T20:21:28.284013+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: cluster 2026-03-09T20:21:28.284013+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: audit 2026-03-09T20:21:28.285340+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: audit 2026-03-09T20:21:28.285340+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: audit 2026-03-09T20:21:28.285440+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: audit 2026-03-09T20:21:28.285440+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: audit 2026-03-09T20:21:28.285968+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: audit 2026-03-09T20:21:28.285968+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: 
dispatch 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: audit 2026-03-09T20:21:29.284311+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: audit 2026-03-09T20:21:29.284311+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: cluster 2026-03-09T20:21:29.286811+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T20:21:29.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:29 vm04 bash[22793]: cluster 2026-03-09T20:21:29.286811+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: audit 2026-03-09T20:21:28.281246+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: audit 2026-03-09T20:21:28.281246+0000 mon.a (mon.0) 382 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: cluster 2026-03-09T20:21:28.284013+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: cluster 2026-03-09T20:21:28.284013+0000 mon.a (mon.0) 383 : cluster [DBG] osdmap e16: 3 total, 2 up, 3 in 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: audit 2026-03-09T20:21:28.285340+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: audit 2026-03-09T20:21:28.285340+0000 mon.a (mon.0) 384 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: audit 2026-03-09T20:21:28.285440+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: audit 2026-03-09T20:21:28.285440+0000 mon.a (mon.0) 385 : audit [INF] from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: audit 2026-03-09T20:21:28.285968+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, 
"args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: audit 2026-03-09T20:21:28.285968+0000 mon.c (mon.1) 11 : audit [INF] from='osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]: dispatch 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: audit 2026-03-09T20:21:29.284311+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: audit 2026-03-09T20:21:29.284311+0000 mon.a (mon.0) 386 : audit [INF] from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=vm08", "root=default"]}]': finished 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: cluster 2026-03-09T20:21:29.286811+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T20:21:29.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:29 vm03 bash[20708]: cluster 2026-03-09T20:21:29.286811+0000 mon.a (mon.0) 387 : cluster [DBG] osdmap e17: 3 total, 2 up, 3 in 2026-03-09T20:21:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:30 vm08 bash[23232]: cluster 2026-03-09T20:21:29.220430+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:30 vm08 bash[23232]: cluster 2026-03-09T20:21:29.220430+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:30 vm08 bash[23232]: audit 2026-03-09T20:21:29.287054+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:30 vm08 bash[23232]: audit 2026-03-09T20:21:29.287054+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:30 vm08 bash[23232]: audit 2026-03-09T20:21:29.291646+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:30 vm08 bash[23232]: audit 2026-03-09T20:21:29.291646+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:30 vm08 bash[23232]: audit 2026-03-09T20:21:30.131275+0000 mon.a (mon.0) 390 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T20:21:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:30 vm08 bash[23232]: audit 2026-03-09T20:21:30.131275+0000 mon.a (mon.0) 390 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T20:21:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:30 vm08 
bash[23232]: audit 2026-03-09T20:21:30.290669+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:30 vm08 bash[23232]: audit 2026-03-09T20:21:30.290669+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:30 vm04 bash[22793]: cluster 2026-03-09T20:21:29.220430+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:30.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:30 vm04 bash[22793]: cluster 2026-03-09T20:21:29.220430+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:30.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:30 vm04 bash[22793]: audit 2026-03-09T20:21:29.287054+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:30 vm04 bash[22793]: audit 2026-03-09T20:21:29.287054+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:30 vm04 bash[22793]: audit 2026-03-09T20:21:29.291646+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:30 vm04 bash[22793]: audit 2026-03-09T20:21:29.291646+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:30 vm04 bash[22793]: audit 2026-03-09T20:21:30.131275+0000 mon.a (mon.0) 390 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T20:21:30.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:30 vm04 bash[22793]: audit 2026-03-09T20:21:30.131275+0000 mon.a (mon.0) 390 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T20:21:30.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:30 vm04 bash[22793]: audit 2026-03-09T20:21:30.290669+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:30 vm04 bash[22793]: audit 2026-03-09T20:21:30.290669+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:30 vm03 bash[20708]: cluster 2026-03-09T20:21:29.220430+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:30 vm03 bash[20708]: cluster 2026-03-09T20:21:29.220430+0000 mgr.a (mgr.14150) 133 : cluster [DBG] pgmap v85: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:30.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:30 vm03 bash[20708]: audit 2026-03-09T20:21:29.287054+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:30 vm03 bash[20708]: audit 2026-03-09T20:21:29.287054+0000 mon.a (mon.0) 388 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:30 vm03 bash[20708]: audit 2026-03-09T20:21:29.291646+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:30 vm03 bash[20708]: audit 2026-03-09T20:21:29.291646+0000 mon.a (mon.0) 389 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:30 vm03 bash[20708]: audit 2026-03-09T20:21:30.131275+0000 mon.a (mon.0) 390 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T20:21:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:30 vm03 bash[20708]: audit 2026-03-09T20:21:30.131275+0000 mon.a (mon.0) 390 : audit [INF] from='osd.2 ' entity='osd.2' 2026-03-09T20:21:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:30 vm03 bash[20708]: audit 2026-03-09T20:21:30.290669+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:30 vm03 bash[20708]: audit 2026-03-09T20:21:30.290669+0000 mon.a (mon.0) 391 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:31.357 INFO:teuthology.orchestra.run.vm08.stdout:Created osd(s) 2 on host 'vm08' 2026-03-09T20:21:31.427 DEBUG:teuthology.orchestra.run.vm08:osd.2> sudo journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@osd.2.service 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: cluster 2026-03-09T20:21:28.599806+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: cluster 2026-03-09T20:21:28.599806+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: cluster 2026-03-09T20:21:28.599856+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: cluster 2026-03-09T20:21:28.599856+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:30.324887+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:30.324887+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.428 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:30.328813+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:30.328813+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:30.684963+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:30.684963+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:30.685477+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:30.685477+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:30.690203+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:30.690203+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: cluster 2026-03-09T20:21:31.135130+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608] boot 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: cluster 2026-03-09T20:21:31.135130+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608] boot 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: cluster 2026-03-09T20:21:31.135162+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: cluster 2026-03-09T20:21:31.135162+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:31.135383+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:31.135383+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:31.428 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:31.252811+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:31.428 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:31 vm08 bash[23232]: audit 2026-03-09T20:21:31.252811+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:31.429 INFO:tasks.cephadm:Waiting for 3 OSDs to come up... 2026-03-09T20:21:31.429 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph osd stat -f json 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: cluster 2026-03-09T20:21:28.599806+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: cluster 2026-03-09T20:21:28.599806+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: cluster 2026-03-09T20:21:28.599856+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: cluster 2026-03-09T20:21:28.599856+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:30.324887+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:30.324887+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:30.328813+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:30.328813+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:30.684963+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:30.684963+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:30.685477+0000 mon.a (mon.0) 395 : audit 
[INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:30.685477+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:30.690203+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:30.690203+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: cluster 2026-03-09T20:21:31.135130+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608] boot 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: cluster 2026-03-09T20:21:31.135130+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608] boot 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: cluster 2026-03-09T20:21:31.135162+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: cluster 2026-03-09T20:21:31.135162+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:31.135383+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:31.135383+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:31.252811+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:31.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:31 vm04 bash[22793]: audit 2026-03-09T20:21:31.252811+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: cluster 2026-03-09T20:21:28.599806+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: cluster 2026-03-09T20:21:28.599806+0000 osd.2 (osd.2) 1 : cluster [DBG] purged_snaps scrub starts 2026-03-09T20:21:31.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: cluster 2026-03-09T20:21:28.599856+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: cluster 2026-03-09T20:21:28.599856+0000 osd.2 (osd.2) 2 : cluster [DBG] purged_snaps scrub ok 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:30.324887+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:30.324887+0000 mon.a (mon.0) 392 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:30.328813+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:30.328813+0000 mon.a (mon.0) 393 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:30.684963+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:30.684963+0000 mon.a (mon.0) 394 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:30.685477+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:30.685477+0000 mon.a (mon.0) 395 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:30.690203+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:30.690203+0000 mon.a (mon.0) 396 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: cluster 2026-03-09T20:21:31.135130+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608] boot 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: cluster 2026-03-09T20:21:31.135130+0000 mon.a (mon.0) 397 : cluster [INF] osd.2 [v2:192.168.123.108:6800/2180466608,v1:192.168.123.108:6801/2180466608] boot 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: cluster 2026-03-09T20:21:31.135162+0000 mon.a (mon.0) 398 : 
cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: cluster 2026-03-09T20:21:31.135162+0000 mon.a (mon.0) 398 : cluster [DBG] osdmap e18: 3 total, 3 up, 3 in 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:31.135383+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:31.135383+0000 mon.a (mon.0) 399 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:31.252811+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:31.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:31 vm03 bash[20708]: audit 2026-03-09T20:21:31.252811+0000 mon.a (mon.0) 400 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:32.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: cluster 2026-03-09T20:21:31.220624+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: cluster 2026-03-09T20:21:31.220624+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: audit 2026-03-09T20:21:31.336858+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: audit 2026-03-09T20:21:31.336858+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: audit 2026-03-09T20:21:31.351627+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: audit 2026-03-09T20:21:31.351627+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: audit 2026-03-09T20:21:31.355541+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: audit 2026-03-09T20:21:31.355541+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: audit 2026-03-09T20:21:32.139733+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: audit 2026-03-09T20:21:32.139733+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: cluster 2026-03-09T20:21:32.141929+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: cluster 2026-03-09T20:21:32.141929+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: audit 2026-03-09T20:21:32.142507+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:32.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:32 vm04 bash[22793]: audit 2026-03-09T20:21:32.142507+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: cluster 2026-03-09T20:21:31.220624+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: cluster 2026-03-09T20:21:31.220624+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: audit 2026-03-09T20:21:31.336858+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: audit 2026-03-09T20:21:31.336858+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: audit 2026-03-09T20:21:31.351627+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: audit 2026-03-09T20:21:31.351627+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: audit 
2026-03-09T20:21:31.355541+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: audit 2026-03-09T20:21:31.355541+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: audit 2026-03-09T20:21:32.139733+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: audit 2026-03-09T20:21:32.139733+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: cluster 2026-03-09T20:21:32.141929+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: cluster 2026-03-09T20:21:32.141929+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: audit 2026-03-09T20:21:32.142507+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:32.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:32 vm03 bash[20708]: audit 2026-03-09T20:21:32.142507+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: cluster 2026-03-09T20:21:31.220624+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: cluster 2026-03-09T20:21:31.220624+0000 mgr.a (mgr.14150) 134 : cluster [DBG] pgmap v88: 0 pgs: ; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: audit 2026-03-09T20:21:31.336858+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: audit 2026-03-09T20:21:31.336858+0000 mon.a (mon.0) 401 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: audit 2026-03-09T20:21:31.351627+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: audit 2026-03-09T20:21:31.351627+0000 mon.a (mon.0) 402 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: audit 2026-03-09T20:21:31.355541+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: audit 2026-03-09T20:21:31.355541+0000 mon.a (mon.0) 403 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: audit 2026-03-09T20:21:32.139733+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: audit 2026-03-09T20:21:32.139733+0000 mon.a (mon.0) 404 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: cluster 2026-03-09T20:21:32.141929+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: cluster 2026-03-09T20:21:32.141929+0000 mon.a (mon.0) 405 : cluster [DBG] osdmap e19: 3 total, 3 up, 3 in 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: audit 2026-03-09T20:21:32.142507+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:32 vm08 bash[23232]: audit 2026-03-09T20:21:32.142507+0000 mon.a (mon.0) 406 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.143178+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.143178+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: cluster 2026-03-09T20:21:33.145792+0000 mon.a (mon.0) 408 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T20:21:34.556 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: cluster 2026-03-09T20:21:33.145792+0000 mon.a (mon.0) 408 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: cluster 2026-03-09T20:21:33.220868+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: cluster 2026-03-09T20:21:33.220868+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.235036+0000 mon.a (mon.0) 409 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.235036+0000 mon.a (mon.0) 409 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.251309+0000 mon.a (mon.0) 410 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.251309+0000 mon.a (mon.0) 410 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.251563+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.251563+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.251701+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.251701+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.251781+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.251781+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.253376+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' 
entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.253376+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.253438+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.253438+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.253489+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.253489+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.255032+0000 mon.c (mon.1) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.255032+0000 mon.c (mon.1) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.268526+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.268526+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.270754+0000 mon.c (mon.1) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.270754+0000 mon.c (mon.1) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.271083+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.271083+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.556 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.271157+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.271157+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.271209+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.271209+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.284611+0000 mon.b (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:34 vm08 bash[23232]: audit 2026-03-09T20:21:33.284611+0000 mon.b (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.143178+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.143178+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: cluster 2026-03-09T20:21:33.145792+0000 mon.a (mon.0) 408 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: cluster 2026-03-09T20:21:33.145792+0000 mon.a (mon.0) 408 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: cluster 2026-03-09T20:21:33.220868+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: cluster 2026-03-09T20:21:33.220868+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.235036+0000 mon.a (mon.0) 409 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 
bash[22793]: audit 2026-03-09T20:21:33.235036+0000 mon.a (mon.0) 409 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.251309+0000 mon.a (mon.0) 410 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.251309+0000 mon.a (mon.0) 410 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.251563+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.251563+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.251701+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.251701+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.251781+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.251781+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.253376+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.253376+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.253438+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.253438+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 
bash[22793]: audit 2026-03-09T20:21:33.253489+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.253489+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.255032+0000 mon.c (mon.1) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.255032+0000 mon.c (mon.1) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.268526+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.268526+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.270754+0000 mon.c (mon.1) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.270754+0000 mon.c (mon.1) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.271083+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.271083+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.271157+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.271157+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.271209+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.271209+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' 
entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.284611+0000 mon.b (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:34 vm04 bash[22793]: audit 2026-03-09T20:21:33.284611+0000 mon.b (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.143178+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.143178+0000 mon.a (mon.0) 407 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: cluster 2026-03-09T20:21:33.145792+0000 mon.a (mon.0) 408 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: cluster 2026-03-09T20:21:33.145792+0000 mon.a (mon.0) 408 : cluster [DBG] osdmap e20: 3 total, 3 up, 3 in 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: cluster 2026-03-09T20:21:33.220868+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: cluster 2026-03-09T20:21:33.220868+0000 mgr.a (mgr.14150) 135 : cluster [DBG] pgmap v91: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.235036+0000 mon.a (mon.0) 409 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.235036+0000 mon.a (mon.0) 409 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.251309+0000 mon.a (mon.0) 410 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.251309+0000 mon.a (mon.0) 410 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.251563+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 
2026-03-09T20:21:33.251563+0000 mon.a (mon.0) 411 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.251701+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.251701+0000 mon.a (mon.0) 412 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.251781+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.251781+0000 mon.a (mon.0) 413 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.253376+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.253376+0000 mon.a (mon.0) 414 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.253438+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.253438+0000 mon.a (mon.0) 415 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.253489+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.253489+0000 mon.a (mon.0) 416 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.255032+0000 mon.c (mon.1) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.255032+0000 mon.c (mon.1) 12 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.268526+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.268526+0000 mon.b (mon.2) 5 : audit [INF] from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.270754+0000 mon.c (mon.1) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.270754+0000 mon.c (mon.1) 13 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.271083+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.271083+0000 mon.a (mon.0) 417 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.271157+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.271157+0000 mon.a (mon.0) 418 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.271209+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.271209+0000 mon.a (mon.0) 419 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.284611+0000 mon.b (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:34.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:34 vm03 bash[20708]: audit 2026-03-09T20:21:33.284611+0000 mon.b (mon.2) 6 : audit [INF] from='admin socket' entity='admin socket' cmd=smart args=[json]: finished 2026-03-09T20:21:35.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:35 vm08 bash[23232]: cluster 2026-03-09T20:21:34.174342+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T20:21:35.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:35 vm08 bash[23232]: cluster 2026-03-09T20:21:34.174342+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T20:21:35.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:35 vm08 
bash[23232]: cluster 2026-03-09T20:21:34.174637+0000 mon.a (mon.0) 421 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-09T20:21:35.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:35 vm08 bash[23232]: cluster 2026-03-09T20:21:34.174637+0000 mon.a (mon.0) 421 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-09T20:21:35.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:35 vm04 bash[22793]: cluster 2026-03-09T20:21:34.174342+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T20:21:35.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:35 vm04 bash[22793]: cluster 2026-03-09T20:21:34.174342+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T20:21:35.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:35 vm04 bash[22793]: cluster 2026-03-09T20:21:34.174637+0000 mon.a (mon.0) 421 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-09T20:21:35.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:35 vm04 bash[22793]: cluster 2026-03-09T20:21:34.174637+0000 mon.a (mon.0) 421 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-09T20:21:35.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:35 vm03 bash[20708]: cluster 2026-03-09T20:21:34.174342+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T20:21:35.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:35 vm03 bash[20708]: cluster 2026-03-09T20:21:34.174342+0000 mon.a (mon.0) 420 : cluster [DBG] osdmap e21: 3 total, 3 up, 3 in 2026-03-09T20:21:35.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:35 vm03 bash[20708]: cluster 2026-03-09T20:21:34.174637+0000 mon.a (mon.0) 421 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-09T20:21:35.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:35 vm03 bash[20708]: cluster 2026-03-09T20:21:34.174637+0000 mon.a (mon.0) 421 : cluster [DBG] mgrmap e14: a(active, since 2m), standbys: b 2026-03-09T20:21:36.034 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:21:36.282 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:21:36.298 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:36 vm03 bash[20708]: cluster 2026-03-09T20:21:35.221101+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:36.298 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:36 vm03 bash[20708]: cluster 2026-03-09T20:21:35.221101+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:36.333 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":21,"num_osds":3,"num_up_osds":3,"osd_up_since":1773087691,"num_in_osds":3,"osd_in_since":1773087674,"num_remapped_pgs":0} 2026-03-09T20:21:36.333 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph osd dump --format=json 2026-03-09T20:21:36.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:36 vm08 bash[23232]: cluster 2026-03-09T20:21:35.221101+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:36.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:36 vm08 bash[23232]: cluster 
2026-03-09T20:21:35.221101+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:36.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:36 vm04 bash[22793]: cluster 2026-03-09T20:21:35.221101+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:36.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:36 vm04 bash[22793]: cluster 2026-03-09T20:21:35.221101+0000 mgr.a (mgr.14150) 136 : cluster [DBG] pgmap v93: 1 pgs: 1 unknown; 0 B data, 479 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.279585+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.103:0/796490970' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.279585+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.103:0/796490970' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.819068+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.819068+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.823178+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.823178+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.823956+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.823956+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.827058+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.827058+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.828313+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:37.407 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.828313+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.828762+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.828762+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.832184+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:37 vm03 bash[20708]: audit 2026-03-09T20:21:36.832184+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.279585+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.103:0/796490970' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.279585+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.103:0/796490970' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.819068+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.819068+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.823178+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.823178+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.823956+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.823956+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.827058+0000 
mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.827058+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.828313+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.828313+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.828762+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.828762+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.832184+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:37 vm08 bash[23232]: audit 2026-03-09T20:21:36.832184+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.279585+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 192.168.123.103:0/796490970' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.279585+0000 mon.b (mon.2) 7 : audit [DBG] from='client.? 
192.168.123.103:0/796490970' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json"}]: dispatch 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.819068+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.819068+0000 mon.a (mon.0) 422 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.823178+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.823178+0000 mon.a (mon.0) 423 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.823956+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.823956+0000 mon.a (mon.0) 424 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.827058+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.827058+0000 mon.a (mon.0) 425 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.828313+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:37.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.828313+0000 mon.a (mon.0) 426 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:21:37.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.828762+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:37.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.828762+0000 mon.a (mon.0) 427 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:21:37.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.832184+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:37.620 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:37 vm04 bash[22793]: audit 2026-03-09T20:21:36.832184+0000 mon.a (mon.0) 428 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:21:38.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:38 vm08 bash[23232]: cephadm 2026-03-09T20:21:36.812785+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T20:21:38.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:38 vm08 bash[23232]: cephadm 2026-03-09T20:21:36.812785+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T20:21:38.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:38 vm08 bash[23232]: cephadm 2026-03-09T20:21:36.824366+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-09T20:21:38.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:38 vm08 bash[23232]: cephadm 2026-03-09T20:21:36.824366+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-09T20:21:38.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:38 vm08 bash[23232]: cluster 2026-03-09T20:21:37.221332+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:38.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:38 vm08 bash[23232]: cluster 2026-03-09T20:21:37.221332+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:38.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:38 vm04 bash[22793]: cephadm 2026-03-09T20:21:36.812785+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T20:21:38.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:38 vm04 bash[22793]: cephadm 2026-03-09T20:21:36.812785+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T20:21:38.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:38 vm04 bash[22793]: cephadm 2026-03-09T20:21:36.824366+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-09T20:21:38.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:38 vm04 bash[22793]: cephadm 2026-03-09T20:21:36.824366+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-09T20:21:38.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:38 vm04 bash[22793]: cluster 2026-03-09T20:21:37.221332+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:38.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:38 vm04 bash[22793]: cluster 2026-03-09T20:21:37.221332+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:38.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:38 vm03 bash[20708]: cephadm 2026-03-09T20:21:36.812785+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T20:21:38.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:38 vm03 bash[20708]: cephadm 2026-03-09T20:21:36.812785+0000 mgr.a (mgr.14150) 137 : cephadm [INF] Detected new or changed devices on vm08 2026-03-09T20:21:38.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:38 vm03 bash[20708]: cephadm 2026-03-09T20:21:36.824366+0000 mgr.a 
(mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-09T20:21:38.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:38 vm03 bash[20708]: cephadm 2026-03-09T20:21:36.824366+0000 mgr.a (mgr.14150) 138 : cephadm [INF] Adjusting osd_memory_target on vm08 to 4551M 2026-03-09T20:21:38.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:38 vm03 bash[20708]: cluster 2026-03-09T20:21:37.221332+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:38.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:38 vm03 bash[20708]: cluster 2026-03-09T20:21:37.221332+0000 mgr.a (mgr.14150) 139 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:40.044 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:21:40.295 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:21:40.296 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":21,"fsid":"f72c9476-1bf4-11f1-9f3a-7162c3a72a6d","created":"2026-03-09T20:18:31.513612+0000","modified":"2026-03-09T20:21:34.144639+0000","last_up_change":"2026-03-09T20:21:31.129429+0000","last_in_change":"2026-03-09T20:21:14.133653+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T20:21:31.255297+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"fe2e9dff-b6c3-47c6-b589-1294f3dee050","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6803","nonce":1560508613}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6805","nonce":1560508613}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6809","nonce":1560508613}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6807","nonce":1560508613}]},"public_addr":"192.168.123.103:6803/1560508613","cluster_addr":"192.168.123.103:6805/1560508613","heartbeat_back_addr":"192.168.123.103:6809/1560508613","heartbeat_front_addr":"192.168.123.103:6807/1560508613","state":["exists","up"]},{"osd":1,"uuid":"3eb69c4e-b9de-4a57-b23c-633c67090f8d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6801","nonce":381841990}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6803","nonce":381841990}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6807","nonce":381841990}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6805","nonce":381841990}]},"public_addr":"192.168.123.104:6801/381841990","cluster_addr":"192.168.123.104:6803/381841990","heartbeat_back_addr":"192.168.123.104:6807/381841990","heartbeat_front_addr":"192.168.123.104:6805/381841990","state":["exists","up"]},{"osd":2,"uuid":"e82b252d-637a-40ce-858c-11bb0bf30bdc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6801","nonce":2180466608}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6803","nonce":2180466608}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6807","nonce":2180466608}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6805","nonce":2180466608}]},"public_addr":"192.168.123.108:6801/2180466608","cluster_addr":"192.168.123.108:6803/2180466608","heartbeat_back_addr":"192.168.123.108:6807/2180466608","heartbeat_front_addr":"192.168.123.108:6805/2180466608","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2
026-03-09T20:20:25.664289+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:20:58.549041+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:21:28.599858+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/2895496693":"2026-03-10T20:18:51.194853+0000","192.168.123.103:0/914222914":"2026-03-10T20:18:51.194853+0000","192.168.123.103:6800/3559043649":"2026-03-10T20:18:51.194853+0000","192.168.123.103:6801/3119412222":"2026-03-10T20:18:41.960547+0000","192.168.123.103:6800/3119412222":"2026-03-10T20:18:41.960547+0000","192.168.123.103:0/3438371388":"2026-03-10T20:18:41.960547+0000","192.168.123.103:0/549070997":"2026-03-10T20:18:51.194853+0000","192.168.123.103:6801/3559043649":"2026-03-10T20:18:51.194853+0000","192.168.123.103:0/3561801670":"2026-03-10T20:18:41.960547+0000","192.168.123.103:0/3652081460":"2026-03-10T20:18:41.960547+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T20:21:40.312 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:40 vm03 bash[20708]: cluster 2026-03-09T20:21:39.221550+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:40.312 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:40 vm03 bash[20708]: cluster 2026-03-09T20:21:39.221550+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:40.341 INFO:tasks.cephadm.ceph_manager.ceph:[{'pool': 1, 'pool_name': '.mgr', 'create_time': '2026-03-09T20:21:31.255297+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'is_stretch_pool': False, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '21', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 
'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_type': 'Fair distribution', 'score_acting': 3, 'score_stable': 3, 'optimal_score': 1, 'raw_score_acting': 3, 'raw_score_stable': 3, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}] 2026-03-09T20:21:40.341 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph osd pool get .mgr pg_num 2026-03-09T20:21:40.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:40 vm08 bash[23232]: cluster 2026-03-09T20:21:39.221550+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:40.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:40 vm08 bash[23232]: cluster 2026-03-09T20:21:39.221550+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:40.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:40 vm04 bash[22793]: cluster 2026-03-09T20:21:39.221550+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:40.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:40 vm04 bash[22793]: cluster 2026-03-09T20:21:39.221550+0000 mgr.a (mgr.14150) 140 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:41.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:41 vm03 bash[20708]: audit 2026-03-09T20:21:40.295337+0000 mon.a (mon.0) 429 : audit [DBG] from='client.? 192.168.123.103:0/996456132' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:41.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:41 vm03 bash[20708]: audit 2026-03-09T20:21:40.295337+0000 mon.a (mon.0) 429 : audit [DBG] from='client.? 192.168.123.103:0/996456132' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:41.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:41 vm08 bash[23232]: audit 2026-03-09T20:21:40.295337+0000 mon.a (mon.0) 429 : audit [DBG] from='client.? 192.168.123.103:0/996456132' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:41.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:41 vm08 bash[23232]: audit 2026-03-09T20:21:40.295337+0000 mon.a (mon.0) 429 : audit [DBG] from='client.? 192.168.123.103:0/996456132' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:41.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:41 vm04 bash[22793]: audit 2026-03-09T20:21:40.295337+0000 mon.a (mon.0) 429 : audit [DBG] from='client.? 192.168.123.103:0/996456132' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:41.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:41 vm04 bash[22793]: audit 2026-03-09T20:21:40.295337+0000 mon.a (mon.0) 429 : audit [DBG] from='client.? 
192.168.123.103:0/996456132' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:42.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:42 vm08 bash[23232]: cluster 2026-03-09T20:21:41.221802+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:42.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:42 vm08 bash[23232]: cluster 2026-03-09T20:21:41.221802+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:42.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:42 vm04 bash[22793]: cluster 2026-03-09T20:21:41.221802+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:42.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:42 vm04 bash[22793]: cluster 2026-03-09T20:21:41.221802+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:42.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:42 vm03 bash[20708]: cluster 2026-03-09T20:21:41.221802+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:42.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:42 vm03 bash[20708]: cluster 2026-03-09T20:21:41.221802+0000 mgr.a (mgr.14150) 141 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:44.053 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:21:44.298 INFO:teuthology.orchestra.run.vm03.stdout:pg_num: 1 2026-03-09T20:21:44.315 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:44 vm03 bash[20708]: cluster 2026-03-09T20:21:43.222055+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:44.315 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:44 vm03 bash[20708]: cluster 2026-03-09T20:21:43.222055+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:44.347 INFO:tasks.cephadm:Setting up client nodes... 2026-03-09T20:21:44.348 INFO:tasks.ceph:Waiting until ceph daemons up and pgs clean... 
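The "waiting for mgr available" step that follows simply polls the mgr map until the active manager reports available=true; the log entries below show this being done through `cephadm shell` with `ceph mgr dump --format=json`. A rough manual equivalent is sketched here, with the cephadm path, container image and fsid copied from this run and the jq filter added for illustration (not part of the original task):

    # Poll the mgr map via cephadm shell until the active mgr reports available=true.
    # Path, image and fsid are taken from this run; the jq check and 5s interval are assumptions.
    until sudo /home/ubuntu/cephtest/cephadm \
            --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df \
            shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- \
            ceph mgr dump --format=json | jq -e '.available == true' >/dev/null; do
        sleep 5
    done

This appears to be what tasks.cephadm.ceph_manager is doing programmatically in the entries that follow.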
2026-03-09T20:21:44.348 INFO:tasks.cephadm.ceph_manager.ceph:waiting for mgr available 2026-03-09T20:21:44.348 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph mgr dump --format=json 2026-03-09T20:21:44.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:44 vm08 bash[23232]: cluster 2026-03-09T20:21:43.222055+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:44.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:44 vm08 bash[23232]: cluster 2026-03-09T20:21:43.222055+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:44.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:44 vm04 bash[22793]: cluster 2026-03-09T20:21:43.222055+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:44.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:44 vm04 bash[22793]: cluster 2026-03-09T20:21:43.222055+0000 mgr.a (mgr.14150) 142 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:45.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:45 vm03 bash[20708]: audit 2026-03-09T20:21:44.297782+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 192.168.123.103:0/3877629920' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:21:45.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:45 vm03 bash[20708]: audit 2026-03-09T20:21:44.297782+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 192.168.123.103:0/3877629920' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:21:45.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:45 vm08 bash[23232]: audit 2026-03-09T20:21:44.297782+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 192.168.123.103:0/3877629920' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:21:45.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:45 vm08 bash[23232]: audit 2026-03-09T20:21:44.297782+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 192.168.123.103:0/3877629920' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:21:45.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:45 vm04 bash[22793]: audit 2026-03-09T20:21:44.297782+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 192.168.123.103:0/3877629920' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:21:45.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:45 vm04 bash[22793]: audit 2026-03-09T20:21:44.297782+0000 mon.a (mon.0) 430 : audit [DBG] from='client.? 
192.168.123.103:0/3877629920' entity='client.admin' cmd=[{"prefix": "osd pool get", "pool": ".mgr", "var": "pg_num"}]: dispatch 2026-03-09T20:21:46.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:46 vm08 bash[23232]: cluster 2026-03-09T20:21:45.222268+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:46.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:46 vm08 bash[23232]: cluster 2026-03-09T20:21:45.222268+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:46.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:46 vm04 bash[22793]: cluster 2026-03-09T20:21:45.222268+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:46.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:46 vm04 bash[22793]: cluster 2026-03-09T20:21:45.222268+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:46.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:46 vm03 bash[20708]: cluster 2026-03-09T20:21:45.222268+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:46.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:46 vm03 bash[20708]: cluster 2026-03-09T20:21:45.222268+0000 mgr.a (mgr.14150) 143 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:48.064 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:21:48.321 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:21:48.331 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:48 vm03 bash[20708]: cluster 2026-03-09T20:21:47.222478+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:48.331 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:48 vm03 bash[20708]: cluster 2026-03-09T20:21:47.222478+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:48.368 INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":14,"flags":0,"active_gid":14150,"active_name":"a","active_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6800","nonce":1916521584},{"type":"v1","addr":"192.168.123.103:6801","nonce":1916521584}]},"active_addr":"192.168.123.103:6801/1916521584","active_change":"2026-03-09T20:18:51.195151+0000","active_mgr_features":4540701547738038271,"available":true,"standbys":[{"gid":24112,"name":"b","mgr_features":4540701547738038271,"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}]}],"modules":["cephadm","dashboard","iostat","nfs","restful"],"available_modules":[{"name":"alerts","can_run":true,"error_string":"","module_options":{"interval":{"name":"interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"How frequently to reexamine health 
status","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"smtp_destination":{"name":"smtp_destination","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Email address to send alerts to","long_desc":"","tags":[],"see_also":[]},"smtp_from_name":{"name":"smtp_from_name","type":"str","level":"advanced","flags":1,"default_value":"Ceph","min":"","max":"","enum_allowed":[],"desc":"Email From: name","long_desc":"","tags":[],"see_also":[]},"smtp_host":{"name":"smtp_host","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_password":{"name":"smtp_password","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Password to authenticate with","long_desc":"","tags":[],"see_also":[]},"smtp_port":{"name":"smtp_port","type":"int","level":"advanced","flags":1,"default_value":"465","min":"","max":"","enum_allowed":[],"desc":"SMTP port","long_desc":"","tags":[],"see_also":[]},"smtp_sender":{"name":"smtp_sender","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"SMTP envelope sender","long_desc":"","tags":[],"see_also":[]},"smtp_ssl":{"name":"smtp_ssl","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Use SSL to connect to SMTP server","long_desc":"","tags":[],"see_also":[]},"smtp_user":{"name":"smtp_user","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"User to authenticate as","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"balancer","can_run":true,"error_string":"","module_options":{"active":{"name":"active","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"automatically balance PGs across cluster","long_desc":"","tags":[],"see_also":[]},"begin_time":{"name":"begin_time","type":"str","level":"advanced","flags":1,"default_value":"0000","min":"","max":"","enum_allowed":[],"desc":"beginning time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"begin_weekday":{"name":"begin_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to this day of the week or later","long_desc":"0 = Sunday, 1 = 
Monday, etc.","tags":[],"see_also":[]},"crush_compat_max_iterations":{"name":"crush_compat_max_iterations","type":"uint","level":"advanced","flags":1,"default_value":"25","min":"1","max":"250","enum_allowed":[],"desc":"maximum number of iterations to attempt optimization","long_desc":"","tags":[],"see_also":[]},"crush_compat_metrics":{"name":"crush_compat_metrics","type":"str","level":"advanced","flags":1,"default_value":"pgs,objects,bytes","min":"","max":"","enum_allowed":[],"desc":"metrics with which to calculate OSD utilization","long_desc":"Value is a list of one or more of \"pgs\", \"objects\", or \"bytes\", and indicates which metrics to use to balance utilization.","tags":[],"see_also":[]},"crush_compat_step":{"name":"crush_compat_step","type":"float","level":"advanced","flags":1,"default_value":"0.5","min":"0.001","max":"0.999","enum_allowed":[],"desc":"aggressiveness of optimization","long_desc":".99 is very aggressive, .01 is less aggressive","tags":[],"see_also":[]},"end_time":{"name":"end_time","type":"str","level":"advanced","flags":1,"default_value":"2359","min":"","max":"","enum_allowed":[],"desc":"ending time of day to automatically balance","long_desc":"This is a time of day in the format HHMM.","tags":[],"see_also":[]},"end_weekday":{"name":"end_weekday","type":"uint","level":"advanced","flags":1,"default_value":"0","min":"0","max":"6","enum_allowed":[],"desc":"Restrict automatic balancing to days of the week earlier than this","long_desc":"0 = Sunday, 1 = Monday, etc.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_score":{"name":"min_score","type":"float","level":"advanced","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"minimum score, below which no optimization is attempted","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":1,"default_value":"upmap","min":"","max":"","enum_allowed":["crush-compat","none","read","upmap","upmap-read"],"desc":"Balancer mode","long_desc":"","tags":[],"see_also":[]},"pool_ids":{"name":"pool_ids","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"pools which the automatic balancing will be limited to","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and attempt 
optimization","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"update_pg_upmap_activity":{"name":"update_pg_upmap_activity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Updates pg_upmap activity stats to be used in `balancer status detail`","long_desc":"","tags":[],"see_also":[]},"upmap_max_deviation":{"name":"upmap_max_deviation","type":"int","level":"advanced","flags":1,"default_value":"5","min":"1","max":"","enum_allowed":[],"desc":"deviation below which no optimization is attempted","long_desc":"If the number of PGs are within this count then no optimization is attempted","tags":[],"see_also":[]},"upmap_max_optimizations":{"name":"upmap_max_optimizations","type":"uint","level":"advanced","flags":1,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"maximum upmap optimizations to make per attempt","long_desc":"","tags":[],"see_also":[]}}},{"name":"cephadm","can_run":true,"error_string":"","module_options":{"agent_down_multiplier":{"name":"agent_down_multiplier","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"","max":"","enum_allowed":[],"desc":"Multiplied by agent refresh rate to calculate how long agent must not report before being marked down","long_desc":"","tags":[],"see_also":[]},"agent_refresh_rate":{"name":"agent_refresh_rate","type":"secs","level":"advanced","flags":0,"default_value":"20","min":"","max":"","enum_allowed":[],"desc":"How often agent on each host will try to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"agent_starting_port":{"name":"agent_starting_port","type":"int","level":"advanced","flags":0,"default_value":"4721","min":"","max":"","enum_allowed":[],"desc":"First port agent will try to bind to (will also try up to next 1000 subsequent ports if blocked)","long_desc":"","tags":[],"see_also":[]},"allow_ptrace":{"name":"allow_ptrace","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow SYS_PTRACE capability on ceph containers","long_desc":"The SYS_PTRACE capability is needed to attach to a process with gdb or strace. 
Enabling this options can allow debugging daemons that encounter problems at runtime.","tags":[],"see_also":[]},"autotune_interval":{"name":"autotune_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to autotune daemon memory","long_desc":"","tags":[],"see_also":[]},"autotune_memory_target_ratio":{"name":"autotune_memory_target_ratio","type":"float","level":"advanced","flags":0,"default_value":"0.7","min":"","max":"","enum_allowed":[],"desc":"ratio of total system memory to divide amongst autotuned daemons","long_desc":"","tags":[],"see_also":[]},"cephadm_log_destination":{"name":"cephadm_log_destination","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":["file","file,syslog","syslog"],"desc":"Destination for cephadm command's persistent logging","long_desc":"","tags":[],"see_also":[]},"cgroups_split":{"name":"cgroups_split","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Pass --cgroups=split when cephadm creates containers (currently podman only)","long_desc":"","tags":[],"see_also":[]},"config_checks_enabled":{"name":"config_checks_enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable or disable the cephadm configuration analysis","long_desc":"","tags":[],"see_also":[]},"config_dashboard":{"name":"config_dashboard","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"manage configs like API endpoints in Dashboard.","long_desc":"","tags":[],"see_also":[]},"container_image_alertmanager":{"name":"container_image_alertmanager","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/alertmanager:v0.25.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_base":{"name":"container_image_base","type":"str","level":"advanced","flags":1,"default_value":"quay.io/ceph/ceph","min":"","max":"","enum_allowed":[],"desc":"Container image name, without the tag","long_desc":"","tags":[],"see_also":[]},"container_image_elasticsearch":{"name":"container_image_elasticsearch","type":"str","level":"advanced","flags":0,"default_value":"quay.io/omrizeneva/elasticsearch:6.8.23","min":"","max":"","enum_allowed":[],"desc":"elasticsearch container image","long_desc":"","tags":[],"see_also":[]},"container_image_grafana":{"name":"container_image_grafana","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/grafana:10.4.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_haproxy":{"name":"container_image_haproxy","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/haproxy:2.3","min":"","max":"","enum_allowed":[],"desc":"HAproxy container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_agent":{"name":"container_image_jaeger_agent","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-agent:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger agent container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_collector":{"name":"container_image_jaeger_collector","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-collector:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger 
collector container image","long_desc":"","tags":[],"see_also":[]},"container_image_jaeger_query":{"name":"container_image_jaeger_query","type":"str","level":"advanced","flags":0,"default_value":"quay.io/jaegertracing/jaeger-query:1.29","min":"","max":"","enum_allowed":[],"desc":"Jaeger query container image","long_desc":"","tags":[],"see_also":[]},"container_image_keepalived":{"name":"container_image_keepalived","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/keepalived:2.2.4","min":"","max":"","enum_allowed":[],"desc":"Keepalived container image","long_desc":"","tags":[],"see_also":[]},"container_image_loki":{"name":"container_image_loki","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/loki:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Loki container image","long_desc":"","tags":[],"see_also":[]},"container_image_node_exporter":{"name":"container_image_node_exporter","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/node-exporter:v1.7.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_nvmeof":{"name":"container_image_nvmeof","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/nvmeof:1.2.5","min":"","max":"","enum_allowed":[],"desc":"Nvme-of container image","long_desc":"","tags":[],"see_also":[]},"container_image_prometheus":{"name":"container_image_prometheus","type":"str","level":"advanced","flags":0,"default_value":"quay.io/prometheus/prometheus:v2.51.0","min":"","max":"","enum_allowed":[],"desc":"Prometheus container image","long_desc":"","tags":[],"see_also":[]},"container_image_promtail":{"name":"container_image_promtail","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/promtail:3.0.0","min":"","max":"","enum_allowed":[],"desc":"Promtail container image","long_desc":"","tags":[],"see_also":[]},"container_image_samba":{"name":"container_image_samba","type":"str","level":"advanced","flags":0,"default_value":"quay.io/samba.org/samba-server:devbuilds-centos-amd64","min":"","max":"","enum_allowed":[],"desc":"Samba/SMB container image","long_desc":"","tags":[],"see_also":[]},"container_image_snmp_gateway":{"name":"container_image_snmp_gateway","type":"str","level":"advanced","flags":0,"default_value":"quay.io/ceph/snmp-notifier:v1.2.1","min":"","max":"","enum_allowed":[],"desc":"SNMP Gateway container image","long_desc":"","tags":[],"see_also":[]},"container_init":{"name":"container_init","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Run podman/docker with `--init`","long_desc":"","tags":[],"see_also":[]},"daemon_cache_timeout":{"name":"daemon_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"seconds to cache service (daemon) inventory","long_desc":"","tags":[],"see_also":[]},"default_cephadm_command_timeout":{"name":"default_cephadm_command_timeout","type":"int","level":"advanced","flags":0,"default_value":"900","min":"","max":"","enum_allowed":[],"desc":"Default timeout applied to cephadm commands run directly on the host (in seconds)","long_desc":"","tags":[],"see_also":[]},"default_registry":{"name":"default_registry","type":"str","level":"advanced","flags":0,"default_value":"quay.io","min":"","max":"","enum_allowed":[],"desc":"Search-registry to which we should normalize unqualified image names. 
This is not the default registry","long_desc":"","tags":[],"see_also":[]},"device_cache_timeout":{"name":"device_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"1800","min":"","max":"","enum_allowed":[],"desc":"seconds to cache device inventory","long_desc":"","tags":[],"see_also":[]},"device_enhanced_scan":{"name":"device_enhanced_scan","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use libstoragemgmt during device scans","long_desc":"","tags":[],"see_also":[]},"facts_cache_timeout":{"name":"facts_cache_timeout","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"seconds to cache host facts data","long_desc":"","tags":[],"see_also":[]},"grafana_dashboards_path":{"name":"grafana_dashboards_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/grafana/dashboards/ceph-dashboard/","min":"","max":"","enum_allowed":[],"desc":"location of dashboards to include in grafana deployments","long_desc":"","tags":[],"see_also":[]},"host_check_interval":{"name":"host_check_interval","type":"secs","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to perform a host check","long_desc":"","tags":[],"see_also":[]},"hw_monitoring":{"name":"hw_monitoring","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Deploy hw monitoring daemon on every host.","long_desc":"","tags":[],"see_also":[]},"inventory_list_all":{"name":"inventory_list_all","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Whether ceph-volume inventory should report more devices (mostly mappers (LVs / mpaths), partitions...)","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_refresh_metadata":{"name":"log_refresh_metadata","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Log all refresh metadata. Includes daemon, device, and host info collected regularly. 
Only has effect if logging at debug level","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"log to the \"cephadm\" cluster log channel\"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf":{"name":"manage_etc_ceph_ceph_conf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Manage and own /etc/ceph/ceph.conf on the hosts.","long_desc":"","tags":[],"see_also":[]},"manage_etc_ceph_ceph_conf_hosts":{"name":"manage_etc_ceph_ceph_conf_hosts","type":"str","level":"advanced","flags":0,"default_value":"*","min":"","max":"","enum_allowed":[],"desc":"PlacementSpec describing on which hosts to manage /etc/ceph/ceph.conf","long_desc":"","tags":[],"see_also":[]},"max_count_per_host":{"name":"max_count_per_host","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of daemons per service per host","long_desc":"","tags":[],"see_also":[]},"max_osd_draining_count":{"name":"max_osd_draining_count","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"max number of osds that will be drained simultaneously when osds are removed","long_desc":"","tags":[],"see_also":[]},"migration_current":{"name":"migration_current","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"internal - do not modify","long_desc":"","tags":[],"see_also":[]},"mode":{"name":"mode","type":"str","level":"advanced","flags":0,"default_value":"root","min":"","max":"","enum_allowed":["cephadm-package","root"],"desc":"mode for remote execution of cephadm","long_desc":"","tags":[],"see_also":[]},"oob_default_addr":{"name":"oob_default_addr","type":"str","level":"advanced","flags":0,"default_value":"169.254.1.1","min":"","max":"","enum_allowed":[],"desc":"Default address for RedFish API (oob management).","long_desc":"","tags":[],"see_also":[]},"prometheus_alerts_path":{"name":"prometheus_alerts_path","type":"str","level":"advanced","flags":0,"default_value":"/etc/prometheus/ceph/ceph_default_alerts.yml","min":"","max":"","enum_allowed":[],"desc":"location of alerts to include in prometheus deployments","long_desc":"","tags":[],"see_also":[]},"registry_insecure":{"name":"registry_insecure","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Registry is to be considered insecure (no TLS available). Only for development purposes.","long_desc":"","tags":[],"see_also":[]},"registry_password":{"name":"registry_password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository password. 
Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"registry_url":{"name":"registry_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Registry url for login purposes. This is not the default registry","long_desc":"","tags":[],"see_also":[]},"registry_username":{"name":"registry_username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"Custom repository username. Only used for logging into a registry.","long_desc":"","tags":[],"see_also":[]},"secure_monitoring_stack":{"name":"secure_monitoring_stack","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable TLS security for all the monitoring stack daemons","long_desc":"","tags":[],"see_also":[]},"service_discovery_port":{"name":"service_discovery_port","type":"int","level":"advanced","flags":0,"default_value":"8765","min":"","max":"","enum_allowed":[],"desc":"cephadm service discovery port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssh_config_file":{"name":"ssh_config_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"customized SSH config file to connect to managed hosts","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_count_max":{"name":"ssh_keepalive_count_max","type":"int","level":"advanced","flags":0,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"How many times ssh connections can fail liveness checks before the host is marked offline","long_desc":"","tags":[],"see_also":[]},"ssh_keepalive_interval":{"name":"ssh_keepalive_interval","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"How often ssh connections are checked for liveness","long_desc":"","tags":[],"see_also":[]},"use_agent":{"name":"use_agent","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Use cephadm agent on each host to gather and send metadata","long_desc":"","tags":[],"see_also":[]},"use_repo_digest":{"name":"use_repo_digest","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Automatically convert image tags to image digest. 
Make sure all daemons use the same image","long_desc":"","tags":[],"see_also":[]},"warn_on_failed_host_check":{"name":"warn_on_failed_host_check","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if the host check fails","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_daemons":{"name":"warn_on_stray_daemons","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected that are not managed by cephadm","long_desc":"","tags":[],"see_also":[]},"warn_on_stray_hosts":{"name":"warn_on_stray_hosts","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"raise a health warning if daemons are detected on a host that is not managed by cephadm","long_desc":"","tags":[],"see_also":[]}}},{"name":"crash","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"retain_interval":{"name":"retain_interval","type":"secs","level":"advanced","flags":1,"default_value":"31536000","min":"","max":"","enum_allowed":[],"desc":"how long to retain crashes before pruning them","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_recent_interval":{"name":"warn_recent_interval","type":"secs","level":"advanced","flags":1,"default_value":"1209600","min":"","max":"","enum_allowed":[],"desc":"time interval in which to warn about recent 
crashes","long_desc":"","tags":[],"see_also":[]}}},{"name":"dashboard","can_run":true,"error_string":"","module_options":{"ACCOUNT_LOCKOUT_ATTEMPTS":{"name":"ACCOUNT_LOCKOUT_ATTEMPTS","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_HOST":{"name":"ALERTMANAGER_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ALERTMANAGER_API_SSL_VERIFY":{"name":"ALERTMANAGER_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_ENABLED":{"name":"AUDIT_API_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"AUDIT_API_LOG_PAYLOAD":{"name":"AUDIT_API_LOG_PAYLOAD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ENABLE_BROWSABLE_API":{"name":"ENABLE_BROWSABLE_API","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_CEPHFS":{"name":"FEATURE_TOGGLE_CEPHFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_DASHBOARD":{"name":"FEATURE_TOGGLE_DASHBOARD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_ISCSI":{"name":"FEATURE_TOGGLE_ISCSI","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_MIRRORING":{"name":"FEATURE_TOGGLE_MIRRORING","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_NFS":{"name":"FEATURE_TOGGLE_NFS","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RBD":{"name":"FEATURE_TOGGLE_RBD","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"FEATURE_TOGGLE_RGW":{"name":"FEATURE_TOGGLE_RGW","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE":{"name":"GANESHA_CLUSTERS_RADOS_POOL_NAMESPACE","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_PASSWORD":{"name":"GRAFANA_API_PASSWORD","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_SSL_VERIFY":{"name":"GRAFANA_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_URL":{"name":"GRAFANA_API_URL","type":"str","level":"advanced","flags":0
,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_API_USERNAME":{"name":"GRAFANA_API_USERNAME","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_FRONTEND_API_URL":{"name":"GRAFANA_FRONTEND_API_URL","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"GRAFANA_UPDATE_DASHBOARDS":{"name":"GRAFANA_UPDATE_DASHBOARDS","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISCSI_API_SSL_VERIFICATION":{"name":"ISCSI_API_SSL_VERIFICATION","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ISSUE_TRACKER_API_KEY":{"name":"ISSUE_TRACKER_API_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_HOST":{"name":"PROMETHEUS_API_HOST","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PROMETHEUS_API_SSL_VERIFY":{"name":"PROMETHEUS_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_COMPLEXITY_ENABLED":{"name":"PWD_POLICY_CHECK_COMPLEXITY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED":{"name":"PWD_POLICY_CHECK_EXCLUSION_LIST_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_LENGTH_ENABLED":{"name":"PWD_POLICY_CHECK_LENGTH_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_OLDPWD_ENABLED":{"name":"PWD_POLICY_CHECK_OLDPWD_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_REPETITIVE_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED":{"name":"PWD_POLICY_CHECK_SEQUENTIAL_CHARS_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_CHECK_USERNAME_ENABLED":{"name":"PWD_POLICY_CHECK_USERNAME_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_ENABLED":{"name":"PWD_POLICY_ENABLED","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_EXCLUSION_LIST":{"name":"PWD_POLICY_EXCLUSION_LIST","type":"str","level":"advanced","flags":0,"def
ault_value":"osd,host,dashboard,pool,block,nfs,ceph,monitors,gateway,logs,crush,maps","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_COMPLEXITY":{"name":"PWD_POLICY_MIN_COMPLEXITY","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"PWD_POLICY_MIN_LENGTH":{"name":"PWD_POLICY_MIN_LENGTH","type":"int","level":"advanced","flags":0,"default_value":"8","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"REST_REQUESTS_TIMEOUT":{"name":"REST_REQUESTS_TIMEOUT","type":"int","level":"advanced","flags":0,"default_value":"45","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ACCESS_KEY":{"name":"RGW_API_ACCESS_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_ADMIN_RESOURCE":{"name":"RGW_API_ADMIN_RESOURCE","type":"str","level":"advanced","flags":0,"default_value":"admin","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SECRET_KEY":{"name":"RGW_API_SECRET_KEY","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"RGW_API_SSL_VERIFY":{"name":"RGW_API_SSL_VERIFY","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_SPAN":{"name":"USER_PWD_EXPIRATION_SPAN","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_1":{"name":"USER_PWD_EXPIRATION_WARNING_1","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"USER_PWD_EXPIRATION_WARNING_2":{"name":"USER_PWD_EXPIRATION_WARNING_2","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"cross_origin_url":{"name":"cross_origin_url","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"crt_file":{"name":"crt_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"debug":{"name":"debug","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Enable/disable debug 
options","long_desc":"","tags":[],"see_also":[]},"jwt_token_ttl":{"name":"jwt_token_ttl","type":"int","level":"advanced","flags":0,"default_value":"28800","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"motd":{"name":"motd","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"The message of the day","long_desc":"","tags":[],"see_also":[]},"redirect_resolve_ip_addr":{"name":"redirect_resolve_ip_addr","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":0,"default_value":"8080","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl_server_port":{"name":"ssl_server_port","type":"int","level":"advanced","flags":0,"default_value":"8443","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":0,"default_value":"redirect","min":"","max":"","enum_allowed":["error","redirect"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":0,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url_prefix":{"name":"url_prefix","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"devicehealth","can_run":true,"error_string":"","module_options":{"enable_monitoring":{"name":"enable_monitoring","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"monitor device health 
metrics","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mark_out_threshold":{"name":"mark_out_threshold","type":"secs","level":"advanced","flags":1,"default_value":"2419200","min":"","max":"","enum_allowed":[],"desc":"automatically mark OSD if it may fail before this long","long_desc":"","tags":[],"see_also":[]},"pool_name":{"name":"pool_name","type":"str","level":"advanced","flags":1,"default_value":"device_health_metrics","min":"","max":"","enum_allowed":[],"desc":"name of pool in which to store device health metrics","long_desc":"","tags":[],"see_also":[]},"retention_period":{"name":"retention_period","type":"secs","level":"advanced","flags":1,"default_value":"15552000","min":"","max":"","enum_allowed":[],"desc":"how long to retain device health metrics","long_desc":"","tags":[],"see_also":[]},"scrape_frequency":{"name":"scrape_frequency","type":"secs","level":"advanced","flags":1,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"how frequently to scrape device health metrics","long_desc":"","tags":[],"see_also":[]},"self_heal":{"name":"self_heal","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"preemptively heal cluster around devices that may fail","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"how frequently to wake up and check device health","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"warn_threshold":{"name":"warn_threshold","type":"secs","level":"advanced","flags":1,"default_value":"7257600","min":"","max":"","enum_allowed":[],"desc":"raise health warning if OSD may fail before this 
long","long_desc":"","tags":[],"see_also":[]}}},{"name":"diskprediction_local","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predict_interval":{"name":"predict_interval","type":"str","level":"advanced","flags":0,"default_value":"86400","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"predictor_model":{"name":"predictor_model","type":"str","level":"advanced","flags":0,"default_value":"prophetstor","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"str","level":"advanced","flags":0,"default_value":"600","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"influx","can_run":false,"error_string":"influxdb python module not found","module_options":{"batch_size":{"name":"batch_size","type":"int","level":"advanced","flags":0,"default_value":"5000","min":"","max":"","enum_allowed":[],"desc":"How big batches of data points should be when sending to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"database":{"name":"database","type":"str","level":"advanced","flags":0,"default_value":"ceph","min":"","max":"","enum_allowed":[],"desc":"InfluxDB database name. You will need to create this database and grant write privileges to the configured username or the username must have admin privileges to create it.","long_desc":"","tags":[],"see_also":[]},"hostname":{"name":"hostname","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server hostname","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"30","min":"5","max":"","enum_allowed":[],"desc":"Time between reports to InfluxDB. 
Default 30 seconds.","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"password":{"name":"password","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"password of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"port":{"name":"port","type":"int","level":"advanced","flags":0,"default_value":"8086","min":"","max":"","enum_allowed":[],"desc":"InfluxDB server port","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"ssl":{"name":"ssl","type":"str","level":"advanced","flags":0,"default_value":"false","min":"","max":"","enum_allowed":[],"desc":"Use https connection for InfluxDB server. Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]},"threads":{"name":"threads","type":"int","level":"advanced","flags":0,"default_value":"5","min":"1","max":"32","enum_allowed":[],"desc":"How many worker threads should be spawned for sending data to InfluxDB.","long_desc":"","tags":[],"see_also":[]},"username":{"name":"username","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"username of InfluxDB server user","long_desc":"","tags":[],"see_also":[]},"verify_ssl":{"name":"verify_ssl","type":"str","level":"advanced","flags":0,"default_value":"true","min":"","max":"","enum_allowed":[],"desc":"Verify https cert for InfluxDB server. 
Use \"true\" or \"false\".","long_desc":"","tags":[],"see_also":[]}}},{"name":"insights","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"iostat","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"k8sevents","can_run":true,"error_string":"","module_options":{"ceph_event_retention_days":{"name":"ceph_event_retention_days","type":"int","level":"advanced","flags":0,"default_value":"7","min":"","max":"","enum_allowed":[],"desc":"Days to hold ceph event information within local cache","long_desc":"","tags":[],"see_also":[]},"config_check_secs":{"name":"config_check_secs","type":"int","level":"advanced","flags":0,"default_value":"10","min":"10","max":"","enum_allowed":[],"desc":"interval (secs) to check for cluster configuration 
changes","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"localpool","can_run":true,"error_string":"","module_options":{"failure_domain":{"name":"failure_domain","type":"str","level":"advanced","flags":1,"default_value":"host","min":"","max":"","enum_allowed":[],"desc":"failure domain for any created local pool","long_desc":"what failure domain we should separate data replicas across.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"min_size":{"name":"min_size","type":"int","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"default min_size for any created local pool","long_desc":"value to set min_size to (unchanged from Ceph's default if this option is not set)","tags":[],"see_also":[]},"num_rep":{"name":"num_rep","type":"int","level":"advanced","flags":1,"default_value":"3","min":"","max":"","enum_allowed":[],"desc":"default replica count for any created local pool","long_desc":"","tags":[],"see_also":[]},"pg_num":{"name":"pg_num","type":"int","level":"advanced","flags":1,"default_value":"128","min":"","max":"","enum_allowed":[],"desc":"default pg_num for any created local pool","long_desc":"","tags":[],"see_also":[]},"prefix":{"name":"prefix","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"name prefix for any created local 
pool","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"subtree":{"name":"subtree","type":"str","level":"advanced","flags":1,"default_value":"rack","min":"","max":"","enum_allowed":[],"desc":"CRUSH level for which to create a local pool","long_desc":"which CRUSH subtree type the module should create a pool for.","tags":[],"see_also":[]}}},{"name":"mds_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"mirroring","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"nfs","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":
"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"orchestrator","can_run":true,"error_string":"","module_options":{"fail_fs":{"name":"fail_fs","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Fail filesystem for rapid multi-rank mds upgrade","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"orchestrator":{"name":"orchestrator","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["cephadm","rook","test_orchestrator"],"desc":"Orchestrator backend","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_perf_query","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"osd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[
]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"pg_autoscaler","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"threshold":{"name":"threshold","type":"float","level":"advanced","flags":0,"default_value":"3.0","min":"1.0","max":"","enum_allowed":[],"desc":"scaling threshold","long_desc":"The factor by which the `NEW PG_NUM` must vary from the current`PG_NUM` before being accepted. 
Cannot be less than 1.0","tags":[],"see_also":[]}}},{"name":"progress","can_run":true,"error_string":"","module_options":{"allow_pg_recovery_event":{"name":"allow_pg_recovery_event","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow the module to show pg recovery progress","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_completed_events":{"name":"max_completed_events","type":"int","level":"advanced","flags":1,"default_value":"50","min":"","max":"","enum_allowed":[],"desc":"number of past completed events to remember","long_desc":"","tags":[],"see_also":[]},"sleep_interval":{"name":"sleep_interval","type":"secs","level":"advanced","flags":1,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"how long the module is going to sleep","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"prometheus","can_run":true,"error_string":"","module_options":{"cache":{"name":"cache","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"exclude_perf_counters":{"name":"exclude_perf_counters","type":"bool","level":"advanced","flags":1,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Do not include perf-counters in the metrics output","long_desc":"Gathering perf-counters from a single Prometheus exporter can degrade ceph-mgr performance, especially in large clusters. Instead, Ceph-exporter daemons are now used by default for perf-counter gathering. 
This should only be disabled when no ceph-exporters are deployed.","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools":{"name":"rbd_stats_pools","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rbd_stats_pools_refresh_interval":{"name":"rbd_stats_pools_refresh_interval","type":"int","level":"advanced","flags":0,"default_value":"300","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"scrape_interval":{"name":"scrape_interval","type":"float","level":"advanced","flags":0,"default_value":"15.0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"::","min":"","max":"","enum_allowed":[],"desc":"the IPv4 or IPv6 address on which the module listens for HTTP requests","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"int","level":"advanced","flags":1,"default_value":"9283","min":"","max":"","enum_allowed":[],"desc":"the port on which the module listens for HTTP 
requests","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"stale_cache_strategy":{"name":"stale_cache_strategy","type":"str","level":"advanced","flags":0,"default_value":"log","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_behaviour":{"name":"standby_behaviour","type":"str","level":"advanced","flags":1,"default_value":"default","min":"","max":"","enum_allowed":["default","error"],"desc":"","long_desc":"","tags":[],"see_also":[]},"standby_error_status_code":{"name":"standby_error_status_code","type":"int","level":"advanced","flags":1,"default_value":"500","min":"400","max":"599","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rbd_support","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_snap_create":{"name":"max_concurrent_snap_create","type":"int","level":"advanced","flags":0,"default_value":"10","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"mirror_snapshot_schedule":{"name":"mirror_snapshot_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"trash_purge_schedule":{"name":"trash_purge_schedule","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"restful","can_run":true,"error_string":"","module_options":{"enable_auth":{"name":"enable_auth","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"key_file":{"name":"key_file","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"st
r","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_requests":{"name":"max_requests","type":"int","level":"advanced","flags":0,"default_value":"500","min":"","max":"","enum_allowed":[],"desc":"Maximum number of requests to keep in memory. When new request comes in, the oldest request will be removed if the number of requests exceeds the max request number. if un-finished request is removed, error message will be logged in the ceph-mgr log.","long_desc":"","tags":[],"see_also":[]},"server_addr":{"name":"server_addr","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"server_port":{"name":"server_port","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rgw","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"secondary_zone_period_retry_limit":{"name":"secondary_zone_period_retry_limit","type":"int","level":"advanced","flags":0,"default_value":"5","min":"","max":"","enum_allowed":[],"desc":"RGW module period update retry limit for secondary 
site","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"rook","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"storage_class":{"name":"storage_class","type":"str","level":"advanced","flags":0,"default_value":"local","min":"","max":"","enum_allowed":[],"desc":"storage class name for LSO-discovered PVs","long_desc":"","tags":[],"see_also":[]}}},{"name":"selftest","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption1":{"name":"roption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"roption2":{"name":"roption2","type":"str","level":"advanced","flags":0,"default_value":"xyz","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption1":{"name":"rwoption1","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption2":{"name":"rwoption2","type":"int","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption3":{"name":"rwoption3","type":"float","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption4":{"name":"rwoption4","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":""
,"long_desc":"","tags":[],"see_also":[]},"rwoption5":{"name":"rwoption5","type":"bool","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption6":{"name":"rwoption6","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"rwoption7":{"name":"rwoption7","type":"int","level":"advanced","flags":0,"default_value":"","min":"1","max":"42","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testkey":{"name":"testkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testlkey":{"name":"testlkey","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"testnewline":{"name":"testnewline","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"snap_schedule","can_run":true,"error_string":"","module_options":{"allow_m_granularity":{"name":"allow_m_granularity","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"allow minute scheduled snapshots","long_desc":"","tags":[],"see_also":[]},"dump_on_update":{"name":"dump_on_update","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"dump database to debug log on 
update","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"stats","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"status","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telegraf","can_run":true,"error_string":"","module_options":{"address":{"name":"address","type":"str","level":"advanced","flags":0,"default_value":"unixgram:///tmp/telegraf.sock","min":"","max":"","enum_a
llowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"15","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"telemetry","can_run":true,"error_string":"","module_options":{"channel_basic":{"name":"channel_basic","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share basic cluster information (size, version)","long_desc":"","tags":[],"see_also":[]},"channel_crash":{"name":"channel_crash","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share metadata about Ceph daemon crashes (version, stack straces, etc)","long_desc":"","tags":[],"see_also":[]},"channel_device":{"name":"channel_device","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)","long_desc":"","tags":[],"see_also":[]},"channel_ident":{"name":"channel_ident","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share a user-provided description and/or contact email for the cluster","long_desc":"","tags":[],"see_also":[]},"channel_perf":{"name":"channel_perf","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Share various performance metrics of a 
cluster","long_desc":"","tags":[],"see_also":[]},"contact":{"name":"contact","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"description":{"name":"description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"device_url":{"name":"device_url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/device","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"enabled":{"name":"enabled","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"int","level":"advanced","flags":0,"default_value":"24","min":"8","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"last_opt_revision":{"name":"last_opt_revision","type":"int","level":"advanced","flags":0,"default_value":"1","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard":{"name":"leaderboard","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"leaderboard_description":{"name":"leaderboard_description","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"organization":{"name":"organization","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"proxy":{"name":"proxy","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"url":{"name":"url","type":"str","level":"advanced","flags":0,"default_value":"https://telemetry.ceph.com/report","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"test_orchestrator","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"adv
anced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"volumes","can_run":true,"error_string":"","module_options":{"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"max_concurrent_clones":{"name":"max_concurrent_clones","type":"int","level":"advanced","flags":0,"default_value":"4","min":"","max":"","enum_allowed":[],"desc":"Number of asynchronous cloner threads","long_desc":"","tags":[],"see_also":[]},"periodic_async_work":{"name":"periodic_async_work","type":"bool","level":"advanced","flags":0,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"Periodically check for async work","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_delay":{"name":"snapshot_clone_delay","type":"int","level":"advanced","flags":0,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"Delay clone begin operation by snapshot_clone_delay seconds","long_desc":"","tags":[],"see_also":[]},"snapshot_clone_no_wait":{"name":"snapshot_clone_no_wait","type":"bool","level":"advanced","flags":0,"default_value":"True","min":"","max":"","enum_allowed":[],"desc":"Reject subvolume clone request when cloner threads are 
busy","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}},{"name":"zabbix","can_run":true,"error_string":"","module_options":{"discovery_interval":{"name":"discovery_interval","type":"uint","level":"advanced","flags":0,"default_value":"100","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"identifier":{"name":"identifier","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"interval":{"name":"interval","type":"secs","level":"advanced","flags":0,"default_value":"60","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_level":{"name":"log_level","type":"str","level":"advanced","flags":1,"default_value":"","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster":{"name":"log_to_cluster","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_cluster_level":{"name":"log_to_cluster_level","type":"str","level":"advanced","flags":1,"default_value":"info","min":"","max":"","enum_allowed":["","critical","debug","error","info","warning"],"desc":"","long_desc":"","tags":[],"see_also":[]},"log_to_file":{"name":"log_to_file","type":"bool","level":"advanced","flags":1,"default_value":"False","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"sqlite3_killpoint":{"name":"sqlite3_killpoint","type":"int","level":"dev","flags":1,"default_value":"0","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_host":{"name":"zabbix_host","type":"str","level":"advanced","flags":0,"default_value":"","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_port":{"name":"zabbix_port","type":"int","level":"advanced","flags":0,"default_value":"10051","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]},"zabbix_sender":{"name":"zabbix_sender","type":"str","level":"advanced","flags":0,"default_value":"/usr/bin/zabbix_sender","min":"","max":"","enum_allowed":[],"desc":"","long_desc":"","tags":[],"see_also":[]}}}],"services":{"dashboard":"https://192.168.123.103:8443/"},"always_on_modules":{"octopus":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"pacific":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"quincy":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"reef":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"],"squid":["balancer","crash","devicehealth","orchestrator","pg_autoscaler","progress","rbd_support","status","telemetry","volumes"]},"force_disabled_modules":{},"last_failure_osd_epoch":3,"active_clients":[{"name":"devicehealth","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":2088389572}]},{"name":"libcephsqlite","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":3051561080}]},{"name":"rbd_support","addrvec":[{"type":"v2","a
ddr":"192.168.123.103:0","nonce":3914606762}]},{"name":"volumes","addrvec":[{"type":"v2","addr":"192.168.123.103:0","nonce":2372012907}]}]} 2026-03-09T20:21:48.370 INFO:tasks.cephadm.ceph_manager.ceph:mgr available! 2026-03-09T20:21:48.370 INFO:tasks.cephadm.ceph_manager.ceph:waiting for all up 2026-03-09T20:21:48.370 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph osd dump --format=json 2026-03-09T20:21:48.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:48 vm04 bash[22793]: cluster 2026-03-09T20:21:47.222478+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:48.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:48 vm04 bash[22793]: cluster 2026-03-09T20:21:47.222478+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:48.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:48 vm08 bash[23232]: cluster 2026-03-09T20:21:47.222478+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:48.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:48 vm08 bash[23232]: cluster 2026-03-09T20:21:47.222478+0000 mgr.a (mgr.14150) 144 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:49.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:49 vm03 bash[20708]: audit 2026-03-09T20:21:48.319454+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 192.168.123.103:0/246223142' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:21:49.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:49 vm03 bash[20708]: audit 2026-03-09T20:21:48.319454+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 192.168.123.103:0/246223142' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:21:49.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:49 vm04 bash[22793]: audit 2026-03-09T20:21:48.319454+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 192.168.123.103:0/246223142' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:21:49.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:49 vm04 bash[22793]: audit 2026-03-09T20:21:48.319454+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 192.168.123.103:0/246223142' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:21:49.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:49 vm08 bash[23232]: audit 2026-03-09T20:21:48.319454+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 192.168.123.103:0/246223142' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:21:49.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:49 vm08 bash[23232]: audit 2026-03-09T20:21:48.319454+0000 mon.a (mon.0) 431 : audit [DBG] from='client.? 
192.168.123.103:0/246223142' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json"}]: dispatch 2026-03-09T20:21:50.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:50 vm04 bash[22793]: cluster 2026-03-09T20:21:49.222756+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:50.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:50 vm04 bash[22793]: cluster 2026-03-09T20:21:49.222756+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:50.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:50 vm03 bash[20708]: cluster 2026-03-09T20:21:49.222756+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:50.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:50 vm03 bash[20708]: cluster 2026-03-09T20:21:49.222756+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:50.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:50 vm08 bash[23232]: cluster 2026-03-09T20:21:49.222756+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:50.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:50 vm08 bash[23232]: cluster 2026-03-09T20:21:49.222756+0000 mgr.a (mgr.14150) 145 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:52.074 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:21:52.298 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:21:52.298 
INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":21,"fsid":"f72c9476-1bf4-11f1-9f3a-7162c3a72a6d","created":"2026-03-09T20:18:31.513612+0000","modified":"2026-03-09T20:21:34.144639+0000","last_up_change":"2026-03-09T20:21:31.129429+0000","last_in_change":"2026-03-09T20:21:14.133653+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T20:21:31.255297+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"fe2e9dff-b6c3-47c6-b589-1294f3dee050","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6803","nonce":1560508613}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6805","nonce":1560508613}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6809","nonce":1560508613}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6807","nonce":1560508613}]},"public_addr":"192.168.123.103:6803/1560508613","cluster_addr":"192.168.123.103:6805/1560508613","heartbeat_back_addr":"192.168.123.103:6809/1560508613","heartbeat_front_addr":"192.168.123.103:6807/1560508613","state":["exists","up"]},{"osd":1,"uuid":"3eb69c4e-b9de-4a57-b23c-633c67090f8d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6801","nonce":381841990}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6803","nonce":381841990}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6807","nonce":381841990}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6805","nonce":381841990}]},"public_addr":"192.168.123.104:6801/381841990","cluster_addr":"192.168.123.104:6803/381841990","heartbeat_back_addr":"192.168.123.104:6807/381841990","heartbeat_front_addr":"192.168.123.104:6805/381841990","state":["exists","up"]},{"osd":2,"uuid":"e82b252d-637a-40ce-858c-11bb0bf30bdc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6801","nonce":2180466608}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6803","nonce":2180466608}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6807","nonce":2180466608}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6805","nonce":2180466608}]},"public_addr":"192.168.123.108:6801/2180466608","cluster_addr":"192.168.123.108:6803/2180466608","heartbeat_back_addr":"192.168.123.108:6807/2180466608","heartbeat_front_addr":"192.168.123.108:6805/2180466608","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2
026-03-09T20:20:25.664289+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:20:58.549041+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:21:28.599858+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/2895496693":"2026-03-10T20:18:51.194853+0000","192.168.123.103:0/914222914":"2026-03-10T20:18:51.194853+0000","192.168.123.103:6800/3559043649":"2026-03-10T20:18:51.194853+0000","192.168.123.103:6801/3119412222":"2026-03-10T20:18:41.960547+0000","192.168.123.103:6800/3119412222":"2026-03-10T20:18:41.960547+0000","192.168.123.103:0/3438371388":"2026-03-10T20:18:41.960547+0000","192.168.123.103:0/549070997":"2026-03-10T20:18:51.194853+0000","192.168.123.103:6801/3559043649":"2026-03-10T20:18:51.194853+0000","192.168.123.103:0/3561801670":"2026-03-10T20:18:41.960547+0000","192.168.123.103:0/3652081460":"2026-03-10T20:18:41.960547+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T20:21:52.341 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:52 vm03 bash[20708]: cluster 2026-03-09T20:21:51.222979+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:52.341 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:52 vm03 bash[20708]: cluster 2026-03-09T20:21:51.222979+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:52.341 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:52 vm03 bash[20708]: audit 2026-03-09T20:21:52.297699+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 192.168.123.103:0/3019768619' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:52.341 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:52 vm03 bash[20708]: audit 2026-03-09T20:21:52.297699+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 192.168.123.103:0/3019768619' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:52.368 INFO:tasks.cephadm.ceph_manager.ceph:all up! 
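
Note on the stretch of log that follows: after "all up!" the harness re-runs ceph osd dump --format=json to confirm every OSD is up and in, then performs its stat-flush handshake before checking for clean PGs. It runs ceph tell osd.N flush_pg_stats on each OSD (each call prints a flush sequence number), then polls ceph osd last-stat-seq osd.N until the monitor reports a sequence at least that new (the "need seq ... got ..." lines), which guarantees the subsequent pg dump reflects current statistics. A minimal sketch of the same handshake done by hand, assuming a ceph CLI with admin access (for example inside cephadm shell) on a three-OSD cluster:

  # Flush PG stats on each OSD and wait until the cluster has absorbed them.
  for osd in 0 1 2; do
    need=$(ceph tell osd.$osd flush_pg_stats)        # flush returns a sequence number
    until [ "$(ceph osd last-stat-seq osd.$osd)" -ge "$need" ]; do
      sleep 1                                        # poll until the mon has seen that seq
    done
  done
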
2026-03-09T20:21:52.369 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph osd dump --format=json 2026-03-09T20:21:52.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:52 vm04 bash[22793]: cluster 2026-03-09T20:21:51.222979+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:52.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:52 vm04 bash[22793]: cluster 2026-03-09T20:21:51.222979+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:52.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:52 vm04 bash[22793]: audit 2026-03-09T20:21:52.297699+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 192.168.123.103:0/3019768619' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:52.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:52 vm04 bash[22793]: audit 2026-03-09T20:21:52.297699+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 192.168.123.103:0/3019768619' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:52.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:52 vm08 bash[23232]: cluster 2026-03-09T20:21:51.222979+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:52.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:52 vm08 bash[23232]: cluster 2026-03-09T20:21:51.222979+0000 mgr.a (mgr.14150) 146 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:52.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:52 vm08 bash[23232]: audit 2026-03-09T20:21:52.297699+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 192.168.123.103:0/3019768619' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:52.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:52 vm08 bash[23232]: audit 2026-03-09T20:21:52.297699+0000 mon.a (mon.0) 432 : audit [DBG] from='client.? 
192.168.123.103:0/3019768619' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:54.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:54 vm04 bash[22793]: cluster 2026-03-09T20:21:53.223291+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:54.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:54 vm04 bash[22793]: cluster 2026-03-09T20:21:53.223291+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:54 vm03 bash[20708]: cluster 2026-03-09T20:21:53.223291+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:54.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:54 vm03 bash[20708]: cluster 2026-03-09T20:21:53.223291+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:54.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:54 vm08 bash[23232]: cluster 2026-03-09T20:21:53.223291+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:54.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:54 vm08 bash[23232]: cluster 2026-03-09T20:21:53.223291+0000 mgr.a (mgr.14150) 147 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:56.086 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:21:56.338 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:21:56.338 
INFO:teuthology.orchestra.run.vm03.stdout:{"epoch":21,"fsid":"f72c9476-1bf4-11f1-9f3a-7162c3a72a6d","created":"2026-03-09T20:18:31.513612+0000","modified":"2026-03-09T20:21:34.144639+0000","last_up_change":"2026-03-09T20:21:31.129429+0000","last_in_change":"2026-03-09T20:21:14.133653+0000","flags":"sortbitwise,recovery_deletes,purged_snapdirs,pglog_hardlimit","flags_num":5799936,"flags_set":["pglog_hardlimit","purged_snapdirs","recovery_deletes","sortbitwise"],"crush_version":8,"full_ratio":0.94999998807907104,"backfillfull_ratio":0.89999997615814209,"nearfull_ratio":0.85000002384185791,"cluster_snapshot":"","pool_max":1,"max_osd":3,"require_min_compat_client":"luminous","min_compat_client":"jewel","require_osd_release":"squid","allow_crimson":false,"pools":[{"pool":1,"pool_name":".mgr","create_time":"2026-03-09T20:21:31.255297+0000","flags":1,"flags_names":"hashpspool","type":1,"size":3,"min_size":2,"crush_rule":0,"peering_crush_bucket_count":0,"peering_crush_bucket_target":0,"peering_crush_bucket_barrier":0,"peering_crush_bucket_mandatory_member":2147483647,"is_stretch_pool":false,"object_hash":2,"pg_autoscale_mode":"off","pg_num":1,"pg_placement_num":1,"pg_placement_num_target":1,"pg_num_target":1,"pg_num_pending":1,"last_pg_merge_meta":{"source_pgid":"0.0","ready_epoch":0,"last_epoch_started":0,"last_epoch_clean":0,"source_version":"0'0","target_version":"0'0"},"last_change":"21","last_force_op_resend":"0","last_force_op_resend_prenautilus":"0","last_force_op_resend_preluminous":"0","auid":0,"snap_mode":"selfmanaged","snap_seq":0,"snap_epoch":0,"pool_snaps":[],"removed_snaps":"[]","quota_max_bytes":0,"quota_max_objects":0,"tiers":[],"tier_of":-1,"read_tier":-1,"write_tier":-1,"cache_mode":"none","target_max_bytes":0,"target_max_objects":0,"cache_target_dirty_ratio_micro":400000,"cache_target_dirty_high_ratio_micro":600000,"cache_target_full_ratio_micro":800000,"cache_min_flush_age":0,"cache_min_evict_age":0,"erasure_code_profile":"","hit_set_params":{"type":"none"},"hit_set_period":0,"hit_set_count":0,"use_gmt_hitset":true,"min_read_recency_for_promote":0,"min_write_recency_for_promote":0,"hit_set_grade_decay_rate":0,"hit_set_search_last_n":0,"grade_table":[],"stripe_width":0,"expected_num_objects":0,"fast_read":false,"options":{"pg_num_max":32,"pg_num_min":1},"application_metadata":{"mgr":{}},"read_balance":{"score_type":"Fair 
distribution","score_acting":3,"score_stable":3,"optimal_score":1,"raw_score_acting":3,"raw_score_stable":3,"primary_affinity_weighted":1,"average_primary_affinity":1,"average_primary_affinity_weighted":1}}],"osds":[{"osd":0,"uuid":"fe2e9dff-b6c3-47c6-b589-1294f3dee050","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":8,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6802","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6803","nonce":1560508613}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6804","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6805","nonce":1560508613}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6808","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6809","nonce":1560508613}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.103:6806","nonce":1560508613},{"type":"v1","addr":"192.168.123.103:6807","nonce":1560508613}]},"public_addr":"192.168.123.103:6803/1560508613","cluster_addr":"192.168.123.103:6805/1560508613","heartbeat_back_addr":"192.168.123.103:6809/1560508613","heartbeat_front_addr":"192.168.123.103:6807/1560508613","state":["exists","up"]},{"osd":1,"uuid":"3eb69c4e-b9de-4a57-b23c-633c67090f8d","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":13,"up_thru":19,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6800","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6801","nonce":381841990}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6802","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6803","nonce":381841990}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6806","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6807","nonce":381841990}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.104:6804","nonce":381841990},{"type":"v1","addr":"192.168.123.104:6805","nonce":381841990}]},"public_addr":"192.168.123.104:6801/381841990","cluster_addr":"192.168.123.104:6803/381841990","heartbeat_back_addr":"192.168.123.104:6807/381841990","heartbeat_front_addr":"192.168.123.104:6805/381841990","state":["exists","up"]},{"osd":2,"uuid":"e82b252d-637a-40ce-858c-11bb0bf30bdc","up":1,"in":1,"weight":1,"primary_affinity":1,"last_clean_begin":0,"last_clean_end":0,"up_from":18,"up_thru":0,"down_at":0,"lost_at":0,"public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6800","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6801","nonce":2180466608}]},"cluster_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6802","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6803","nonce":2180466608}]},"heartbeat_back_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6806","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6807","nonce":2180466608}]},"heartbeat_front_addrs":{"addrvec":[{"type":"v2","addr":"192.168.123.108:6804","nonce":2180466608},{"type":"v1","addr":"192.168.123.108:6805","nonce":2180466608}]},"public_addr":"192.168.123.108:6801/2180466608","cluster_addr":"192.168.123.108:6803/2180466608","heartbeat_back_addr":"192.168.123.108:6807/2180466608","heartbeat_front_addr":"192.168.123.108:6805/2180466608","state":["exists","up"]}],"osd_xinfo":[{"osd":0,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2
026-03-09T20:20:25.664289+0000","dead_epoch":0},{"osd":1,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:20:58.549041+0000","dead_epoch":0},{"osd":2,"down_stamp":"0.000000","laggy_probability":0,"laggy_interval":0,"features":4540701547738038271,"old_weight":0,"last_purged_snaps_scrub":"2026-03-09T20:21:28.599858+0000","dead_epoch":0}],"pg_upmap":[],"pg_upmap_items":[],"pg_upmap_primaries":[],"pg_temp":[],"primary_temp":[],"blocklist":{"192.168.123.103:0/2895496693":"2026-03-10T20:18:51.194853+0000","192.168.123.103:0/914222914":"2026-03-10T20:18:51.194853+0000","192.168.123.103:6800/3559043649":"2026-03-10T20:18:51.194853+0000","192.168.123.103:6801/3119412222":"2026-03-10T20:18:41.960547+0000","192.168.123.103:6800/3119412222":"2026-03-10T20:18:41.960547+0000","192.168.123.103:0/3438371388":"2026-03-10T20:18:41.960547+0000","192.168.123.103:0/549070997":"2026-03-10T20:18:51.194853+0000","192.168.123.103:6801/3559043649":"2026-03-10T20:18:51.194853+0000","192.168.123.103:0/3561801670":"2026-03-10T20:18:41.960547+0000","192.168.123.103:0/3652081460":"2026-03-10T20:18:41.960547+0000"},"range_blocklist":{},"erasure_code_profiles":{"default":{"crush-failure-domain":"osd","k":"2","m":"1","plugin":"jerasure","technique":"reed_sol_van"}},"removed_snaps_queue":[],"new_removed_snaps":[],"new_purged_snaps":[],"crush_node_flags":{},"device_class_flags":{},"stretch_mode":{"stretch_mode_enabled":false,"stretch_bucket_count":0,"degraded_stretch_mode":0,"recovering_stretch_mode":0,"stretch_mode_bucket":0}} 2026-03-09T20:21:56.399 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:56 vm03 bash[20708]: cluster 2026-03-09T20:21:55.223603+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:56.399 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:56 vm03 bash[20708]: cluster 2026-03-09T20:21:55.223603+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:56.399 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph tell osd.0 flush_pg_stats 2026-03-09T20:21:56.399 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph tell osd.1 flush_pg_stats 2026-03-09T20:21:56.399 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph tell osd.2 flush_pg_stats 2026-03-09T20:21:56.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:56 vm04 bash[22793]: cluster 2026-03-09T20:21:55.223603+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:56.628 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:56 vm04 bash[22793]: cluster 2026-03-09T20:21:55.223603+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:56.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:56 vm08 bash[23232]: cluster 
2026-03-09T20:21:55.223603+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:56.837 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:56 vm08 bash[23232]: cluster 2026-03-09T20:21:55.223603+0000 mgr.a (mgr.14150) 148 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:57.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:57 vm04 bash[22793]: audit 2026-03-09T20:21:56.337628+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 192.168.123.103:0/3098662415' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:57.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:57 vm04 bash[22793]: audit 2026-03-09T20:21:56.337628+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 192.168.123.103:0/3098662415' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:57.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:57 vm03 bash[20708]: audit 2026-03-09T20:21:56.337628+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 192.168.123.103:0/3098662415' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:57.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:57 vm03 bash[20708]: audit 2026-03-09T20:21:56.337628+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 192.168.123.103:0/3098662415' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:57.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:57 vm08 bash[23232]: audit 2026-03-09T20:21:56.337628+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 192.168.123.103:0/3098662415' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:57.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:57 vm08 bash[23232]: audit 2026-03-09T20:21:56.337628+0000 mon.a (mon.0) 433 : audit [DBG] from='client.? 
192.168.123.103:0/3098662415' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json"}]: dispatch 2026-03-09T20:21:58.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:58 vm04 bash[22793]: cluster 2026-03-09T20:21:57.223940+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:58.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:21:58 vm04 bash[22793]: cluster 2026-03-09T20:21:57.223940+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:58.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:58 vm03 bash[20708]: cluster 2026-03-09T20:21:57.223940+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:58.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:21:58 vm03 bash[20708]: cluster 2026-03-09T20:21:57.223940+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:58.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:58 vm08 bash[23232]: cluster 2026-03-09T20:21:57.223940+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:21:58.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:21:58 vm08 bash[23232]: cluster 2026-03-09T20:21:57.223940+0000 mgr.a (mgr.14150) 149 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:00.101 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:22:00.101 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:22:00.103 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:22:00.376 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:00 vm03 bash[20708]: cluster 2026-03-09T20:21:59.224159+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:00.376 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:00 vm03 bash[20708]: cluster 2026-03-09T20:21:59.224159+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:00.412 INFO:teuthology.orchestra.run.vm03.stdout:77309411336 2026-03-09T20:22:00.412 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph osd last-stat-seq osd.2 2026-03-09T20:22:00.438 INFO:teuthology.orchestra.run.vm03.stdout:55834574861 2026-03-09T20:22:00.438 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph osd last-stat-seq osd.1 2026-03-09T20:22:00.459 INFO:teuthology.orchestra.run.vm03.stdout:34359738388 2026-03-09T20:22:00.459 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell 
--fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph osd last-stat-seq osd.0 2026-03-09T20:22:00.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:00 vm04 bash[22793]: cluster 2026-03-09T20:21:59.224159+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:00.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:00 vm04 bash[22793]: cluster 2026-03-09T20:21:59.224159+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:00.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:00 vm08 bash[23232]: cluster 2026-03-09T20:21:59.224159+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:00.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:00 vm08 bash[23232]: cluster 2026-03-09T20:21:59.224159+0000 mgr.a (mgr.14150) 150 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:02.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:02 vm04 bash[22793]: cluster 2026-03-09T20:22:01.224366+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:02.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:02 vm04 bash[22793]: cluster 2026-03-09T20:22:01.224366+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:02.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:02 vm03 bash[20708]: cluster 2026-03-09T20:22:01.224366+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:02.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:02 vm03 bash[20708]: cluster 2026-03-09T20:22:01.224366+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:02.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:02 vm08 bash[23232]: cluster 2026-03-09T20:22:01.224366+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:02.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:02 vm08 bash[23232]: cluster 2026-03-09T20:22:01.224366+0000 mgr.a (mgr.14150) 151 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:04.113 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:22:04.379 INFO:teuthology.orchestra.run.vm03.stdout:77309411337 2026-03-09T20:22:04.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:04 vm03 bash[20708]: cluster 2026-03-09T20:22:03.224700+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:04.388 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:04 vm03 bash[20708]: cluster 2026-03-09T20:22:03.224700+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:04.425 INFO:tasks.cephadm.ceph_manager.ceph:need seq 77309411336 got 77309411337 for osd.2 2026-03-09T20:22:04.426 
DEBUG:teuthology.parallel:result is None 2026-03-09T20:22:04.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:04 vm04 bash[22793]: cluster 2026-03-09T20:22:03.224700+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:04.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:04 vm04 bash[22793]: cluster 2026-03-09T20:22:03.224700+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:04.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:04 vm08 bash[23232]: cluster 2026-03-09T20:22:03.224700+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:04.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:04 vm08 bash[23232]: cluster 2026-03-09T20:22:03.224700+0000 mgr.a (mgr.14150) 152 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:05.116 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:22:05.117 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:22:05.388 INFO:teuthology.orchestra.run.vm03.stdout:34359738389 2026-03-09T20:22:05.396 INFO:teuthology.orchestra.run.vm03.stdout:55834574862 2026-03-09T20:22:05.405 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:05 vm03 bash[20708]: audit 2026-03-09T20:22:04.378865+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.103:0/620785692' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:22:05.405 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:05 vm03 bash[20708]: audit 2026-03-09T20:22:04.378865+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.103:0/620785692' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:22:05.444 INFO:tasks.cephadm.ceph_manager.ceph:need seq 34359738388 got 34359738389 for osd.0 2026-03-09T20:22:05.444 DEBUG:teuthology.parallel:result is None 2026-03-09T20:22:05.471 INFO:tasks.cephadm.ceph_manager.ceph:need seq 55834574861 got 55834574862 for osd.1 2026-03-09T20:22:05.471 DEBUG:teuthology.parallel:result is None 2026-03-09T20:22:05.471 INFO:tasks.cephadm.ceph_manager.ceph:waiting for clean 2026-03-09T20:22:05.471 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph pg dump --format=json 2026-03-09T20:22:05.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:05 vm04 bash[22793]: audit 2026-03-09T20:22:04.378865+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.103:0/620785692' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:22:05.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:05 vm04 bash[22793]: audit 2026-03-09T20:22:04.378865+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.103:0/620785692' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:22:05.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:05 vm08 bash[23232]: audit 2026-03-09T20:22:04.378865+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 
192.168.123.103:0/620785692' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:22:05.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:05 vm08 bash[23232]: audit 2026-03-09T20:22:04.378865+0000 mon.a (mon.0) 434 : audit [DBG] from='client.? 192.168.123.103:0/620785692' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 2}]: dispatch 2026-03-09T20:22:06.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:06 vm03 bash[20708]: cluster 2026-03-09T20:22:05.224987+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:06.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:06 vm03 bash[20708]: cluster 2026-03-09T20:22:05.224987+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:06.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:06 vm03 bash[20708]: audit 2026-03-09T20:22:05.390297+0000 mon.c (mon.1) 14 : audit [DBG] from='client.? 192.168.123.103:0/2819684769' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:22:06.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:06 vm03 bash[20708]: audit 2026-03-09T20:22:05.390297+0000 mon.c (mon.1) 14 : audit [DBG] from='client.? 192.168.123.103:0/2819684769' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:22:06.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:06 vm03 bash[20708]: audit 2026-03-09T20:22:05.396589+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 192.168.123.103:0/2910223165' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:22:06.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:06 vm03 bash[20708]: audit 2026-03-09T20:22:05.396589+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 192.168.123.103:0/2910223165' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:22:06.805 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:06 vm08 bash[23232]: cluster 2026-03-09T20:22:05.224987+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:06.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:06 vm08 bash[23232]: cluster 2026-03-09T20:22:05.224987+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:06.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:06 vm08 bash[23232]: audit 2026-03-09T20:22:05.390297+0000 mon.c (mon.1) 14 : audit [DBG] from='client.? 192.168.123.103:0/2819684769' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:22:06.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:06 vm08 bash[23232]: audit 2026-03-09T20:22:05.390297+0000 mon.c (mon.1) 14 : audit [DBG] from='client.? 192.168.123.103:0/2819684769' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:22:06.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:06 vm08 bash[23232]: audit 2026-03-09T20:22:05.396589+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 
192.168.123.103:0/2910223165' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:22:06.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:06 vm08 bash[23232]: audit 2026-03-09T20:22:05.396589+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 192.168.123.103:0/2910223165' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:22:06.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:06 vm04 bash[22793]: cluster 2026-03-09T20:22:05.224987+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:06.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:06 vm04 bash[22793]: cluster 2026-03-09T20:22:05.224987+0000 mgr.a (mgr.14150) 153 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:06.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:06 vm04 bash[22793]: audit 2026-03-09T20:22:05.390297+0000 mon.c (mon.1) 14 : audit [DBG] from='client.? 192.168.123.103:0/2819684769' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:22:06.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:06 vm04 bash[22793]: audit 2026-03-09T20:22:05.390297+0000 mon.c (mon.1) 14 : audit [DBG] from='client.? 192.168.123.103:0/2819684769' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 0}]: dispatch 2026-03-09T20:22:06.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:06 vm04 bash[22793]: audit 2026-03-09T20:22:05.396589+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 192.168.123.103:0/2910223165' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:22:06.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:06 vm04 bash[22793]: audit 2026-03-09T20:22:05.396589+0000 mon.a (mon.0) 435 : audit [DBG] from='client.? 
192.168.123.103:0/2910223165' entity='client.admin' cmd=[{"prefix": "osd last-stat-seq", "id": 1}]: dispatch 2026-03-09T20:22:08.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:08 vm03 bash[20708]: cluster 2026-03-09T20:22:07.225271+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:08.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:08 vm03 bash[20708]: cluster 2026-03-09T20:22:07.225271+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:08.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:08 vm08 bash[23232]: cluster 2026-03-09T20:22:07.225271+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:08.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:08 vm08 bash[23232]: cluster 2026-03-09T20:22:07.225271+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:08.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:08 vm04 bash[22793]: cluster 2026-03-09T20:22:07.225271+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:08.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:08 vm04 bash[22793]: cluster 2026-03-09T20:22:07.225271+0000 mgr.a (mgr.14150) 154 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:09.127 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:22:09.367 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-09T20:22:09.367 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:22:09.423 
INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":110,"stamp":"2026-03-09T20:22:09.225486+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":492412,"kb_used_data":1884,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62409860,"statfs":{"total":64411926528,"available":63907696640,"internally_reserved":0,"allocated":1929216,"data_stored":1541172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_read":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001684"},"pg_stats":[{"pgid":"1.0","version":"20'32","reported_seq":57,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-09T20:21:34.156276+0000","last_change":"2026-03-09T20:21:33.146084+0000","last_active":"2026-03-09T20
:21:34.156276+0000","last_peered":"2026-03-09T20:21:34.156276+0000","last_clean":"2026-03-09T20:21:34.156276+0000","last_became_active":"2026-03-09T20:21:33.145914+0000","last_became_peered":"2026-03-09T20:21:33.145914+0000","last_unstale":"2026-03-09T20:21:34.156276+0000","last_undegraded":"2026-03-09T20:21:34.156276+0000","last_fullsized":"2026-03-09T20:21:34.156276+0000","mapping_epoch":19,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":20,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:21:32.133635+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:21:32.133635+0000","last_clean_scrub_stamp":"2026-03-09T20:21:32.133635+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 2026-03-11T03:26:54.098433+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,0],"acting":[1,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":18,"seq":77309411338,"num_pgs":1,"num_osds":1,"num_per_pool_osds
":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":437204,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530220,"statfs":{"total":21470642176,"available":21022945280,"internally_reserved":0,"allocated":643072,"data_stored":513724,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574863,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27604,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939820,"statfs":{"total":21470642176,"available":21442375680,"internally_reserved":0,"allocated":643072,"data_stored":513724,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738390,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27604,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939820,"statfs":{"total":21470642176,"available":21442375680,"internally_reserved":0,"allocated":643072,"data_stored":513724,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T20:22:09.423 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph pg dump --format=json 2026-03-09T20:22:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:10 vm03 bash[20708]: cluster 2026-03-09T20:22:09.225608+0000 mgr.a (mgr.14150) 155 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:10 vm03 bash[20708]: cluster 2026-03-09T20:22:09.225608+0000 mgr.a (mgr.14150) 155 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 
GiB avail 2026-03-09T20:22:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:10 vm03 bash[20708]: audit 2026-03-09T20:22:09.366969+0000 mgr.a (mgr.14150) 156 : audit [DBG] from='client.14364 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:10.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:10 vm03 bash[20708]: audit 2026-03-09T20:22:09.366969+0000 mgr.a (mgr.14150) 156 : audit [DBG] from='client.14364 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:10.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:10 vm08 bash[23232]: cluster 2026-03-09T20:22:09.225608+0000 mgr.a (mgr.14150) 155 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:10.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:10 vm08 bash[23232]: cluster 2026-03-09T20:22:09.225608+0000 mgr.a (mgr.14150) 155 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:10.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:10 vm08 bash[23232]: audit 2026-03-09T20:22:09.366969+0000 mgr.a (mgr.14150) 156 : audit [DBG] from='client.14364 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:10.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:10 vm08 bash[23232]: audit 2026-03-09T20:22:09.366969+0000 mgr.a (mgr.14150) 156 : audit [DBG] from='client.14364 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:10.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:10 vm04 bash[22793]: cluster 2026-03-09T20:22:09.225608+0000 mgr.a (mgr.14150) 155 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:10.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:10 vm04 bash[22793]: cluster 2026-03-09T20:22:09.225608+0000 mgr.a (mgr.14150) 155 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:10.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:10 vm04 bash[22793]: audit 2026-03-09T20:22:09.366969+0000 mgr.a (mgr.14150) 156 : audit [DBG] from='client.14364 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:10.870 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:10 vm04 bash[22793]: audit 2026-03-09T20:22:09.366969+0000 mgr.a (mgr.14150) 156 : audit [DBG] from='client.14364 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:11.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:11 vm08 bash[23232]: cluster 2026-03-09T20:22:11.225844+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:11.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:11 vm08 bash[23232]: cluster 2026-03-09T20:22:11.225844+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:11.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:11 vm04 bash[22793]: cluster 2026-03-09T20:22:11.225844+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v111: 1 pgs: 1 
active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:11.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:11 vm04 bash[22793]: cluster 2026-03-09T20:22:11.225844+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:11.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:11 vm03 bash[20708]: cluster 2026-03-09T20:22:11.225844+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:11.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:11 vm03 bash[20708]: cluster 2026-03-09T20:22:11.225844+0000 mgr.a (mgr.14150) 157 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:13.138 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:22:13.380 INFO:teuthology.orchestra.run.vm03.stderr:dumped all 2026-03-09T20:22:13.381 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:22:13.456 INFO:teuthology.orchestra.run.vm03.stdout:{"pg_ready":true,"pg_map":{"version":112,"stamp":"2026-03-09T20:22:13.225951+0000","last_osdmap_epoch":0,"last_pg_scan":0,"pg_stats_sum":{"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":0},"osd_stats_sum":{"up_from":0,"seq":0,"num_pgs":3,"num_osds":3,"num_per_pool_osds":3,"num_per_pool_omap_osds":3,"kb":62902272,"kb_used":492412,"kb_used_data":1884,"kb_used_omap":4,"kb_used_meta":80443,"kb_avail":62409860,"statfs":{"total":64411926528,"available":63907696640,"internally_reserved":0,"allocated":1929216,"data_stored":1541172,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":4770,"internal_metadata":82373982},"hb_peers":[],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[],"network_ping_times":[]},"pg_stats_delta":{"stat_sum":{"num_bytes":0,"num_objects":0,"num_object_clones":0,"num_object_copies":0,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":0,"num_whiteouts":0,"num_rea
d":0,"num_read_kb":0,"num_write":0,"num_write_kb":0,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":0,"data_stored":0,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":0,"ondisk_log_size":0,"up":0,"acting":0,"num_store_stats":0,"stamp_delta":"12.001680"},"pg_stats":[{"pgid":"1.0","version":"20'32","reported_seq":57,"reported_epoch":21,"state":"active+clean","last_fresh":"2026-03-09T20:21:34.156276+0000","last_change":"2026-03-09T20:21:33.146084+0000","last_active":"2026-03-09T20:21:34.156276+0000","last_peered":"2026-03-09T20:21:34.156276+0000","last_clean":"2026-03-09T20:21:34.156276+0000","last_became_active":"2026-03-09T20:21:33.145914+0000","last_became_peered":"2026-03-09T20:21:33.145914+0000","last_unstale":"2026-03-09T20:21:34.156276+0000","last_undegraded":"2026-03-09T20:21:34.156276+0000","last_fullsized":"2026-03-09T20:21:34.156276+0000","mapping_epoch":19,"log_start":"0'0","ondisk_log_start":"0'0","created":19,"last_epoch_clean":20,"parent":"0.0","parent_split_bits":0,"last_scrub":"0'0","last_scrub_stamp":"2026-03-09T20:21:32.133635+0000","last_deep_scrub":"0'0","last_deep_scrub_stamp":"2026-03-09T20:21:32.133635+0000","last_clean_scrub_stamp":"2026-03-09T20:21:32.133635+0000","objects_scrubbed":0,"log_size":32,"log_dups_size":0,"ondisk_log_size":32,"stats_invalid":false,"dirty_stats_invalid":false,"omap_stats_invalid":false,"hitset_stats_invalid":false,"hitset_bytes_stats_invalid":false,"pin_stats_invalid":false,"manifest_stats_invalid":false,"snaptrimq_len":0,"last_scrub_duration":0,"scrub_schedule":"periodic scrub scheduled @ 
2026-03-11T03:26:54.098433+0000","scrub_duration":0,"objects_trimmed":0,"snaptrim_duration":0,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"up":[1,2,0],"acting":[1,2,0],"avail_no_missing":[],"object_location_counts":[],"blocked_by":[],"up_primary":1,"acting_primary":1,"purged_snaps":[]}],"pool_stats":[{"poolid":1,"num_pg":1,"stat_sum":{"num_bytes":459280,"num_objects":2,"num_object_clones":0,"num_object_copies":6,"num_objects_missing_on_primary":0,"num_objects_missing":0,"num_objects_degraded":0,"num_objects_misplaced":0,"num_objects_unfound":0,"num_objects_dirty":2,"num_whiteouts":0,"num_read":46,"num_read_kb":37,"num_write":57,"num_write_kb":584,"num_scrub_errors":0,"num_shallow_scrub_errors":0,"num_deep_scrub_errors":0,"num_objects_recovered":0,"num_bytes_recovered":0,"num_keys_recovered":0,"num_objects_omap":0,"num_objects_hit_set_archive":0,"num_bytes_hit_set_archive":0,"num_flush":0,"num_flush_kb":0,"num_evict":0,"num_evict_kb":0,"num_promote":0,"num_flush_mode_high":0,"num_flush_mode_low":0,"num_evict_mode_some":0,"num_evict_mode_full":0,"num_objects_pinned":0,"num_legacy_snapsets":0,"num_large_omap_objects":0,"num_objects_manifest":0,"num_omap_bytes":0,"num_omap_keys":0,"num_objects_repaired":0},"store_stats":{"total":0,"available":0,"internally_reserved":0,"allocated":1388544,"data_stored":1377840,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},"log_size":32,"ondisk_log_size":32,"up":3,"acting":3,"num_store_stats":3}],"osd_stats":[{"osd":2,"up_from":18,"seq":77309411339,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":437204,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20530220,"statfs":{"total":21470642176,"available":21022945280,"internally_reserved":0,"allocated":643072,"data_stored":513724,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[0,1],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":1,"up_from":13,"seq":55834574864,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27604,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939820,"statfs":{"total":21470642176,"available":21442375680,"internally_reserved":0,"allocated":643072,"data_stored":513724,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"i
nternal_metadata":27457994},"hb_peers":[0,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]},{"osd":0,"up_from":8,"seq":34359738390,"num_pgs":1,"num_osds":1,"num_per_pool_osds":1,"num_per_pool_omap_osds":1,"kb":20967424,"kb_used":27604,"kb_used_data":628,"kb_used_omap":1,"kb_used_meta":26814,"kb_avail":20939820,"statfs":{"total":21470642176,"available":21442375680,"internally_reserved":0,"allocated":643072,"data_stored":513724,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":1590,"internal_metadata":27457994},"hb_peers":[1,2],"snap_trim_queue_len":0,"num_snap_trimming":0,"num_shards_repaired":0,"op_queue_age_hist":{"histogram":[],"upper_bound":1},"perf_stat":{"commit_latency_ms":0,"apply_latency_ms":0,"commit_latency_ns":0,"apply_latency_ns":0},"alerts":[]}],"pool_statfs":[{"poolid":1,"osd":0,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":1,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0},{"poolid":1,"osd":2,"total":0,"available":0,"internally_reserved":0,"allocated":462848,"data_stored":459280,"data_compressed":0,"data_compressed_allocated":0,"data_compressed_original":0,"omap_allocated":0,"internal_metadata":0}]}} 2026-03-09T20:22:13.456 INFO:tasks.cephadm.ceph_manager.ceph:clean! 2026-03-09T20:22:13.456 INFO:tasks.ceph:Waiting until ceph cluster ceph is healthy... 
2026-03-09T20:22:13.457 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy 2026-03-09T20:22:13.457 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph health --format=json 2026-03-09T20:22:14.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:14 vm03 bash[20708]: cluster 2026-03-09T20:22:13.226087+0000 mgr.a (mgr.14150) 158 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:14.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:14 vm03 bash[20708]: cluster 2026-03-09T20:22:13.226087+0000 mgr.a (mgr.14150) 158 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:14.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:14 vm03 bash[20708]: audit 2026-03-09T20:22:13.380688+0000 mgr.a (mgr.14150) 159 : audit [DBG] from='client.14370 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:14.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:14 vm03 bash[20708]: audit 2026-03-09T20:22:13.380688+0000 mgr.a (mgr.14150) 159 : audit [DBG] from='client.14370 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:14.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:14 vm08 bash[23232]: cluster 2026-03-09T20:22:13.226087+0000 mgr.a (mgr.14150) 158 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:14.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:14 vm08 bash[23232]: cluster 2026-03-09T20:22:13.226087+0000 mgr.a (mgr.14150) 158 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:14.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:14 vm08 bash[23232]: audit 2026-03-09T20:22:13.380688+0000 mgr.a (mgr.14150) 159 : audit [DBG] from='client.14370 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:14.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:14 vm08 bash[23232]: audit 2026-03-09T20:22:13.380688+0000 mgr.a (mgr.14150) 159 : audit [DBG] from='client.14370 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:14.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:14 vm04 bash[22793]: cluster 2026-03-09T20:22:13.226087+0000 mgr.a (mgr.14150) 158 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:14.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:14 vm04 bash[22793]: cluster 2026-03-09T20:22:13.226087+0000 mgr.a (mgr.14150) 158 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:14.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:14 vm04 bash[22793]: audit 2026-03-09T20:22:13.380688+0000 mgr.a (mgr.14150) 159 : audit [DBG] from='client.14370 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:14.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:14 vm04 bash[22793]: audit 2026-03-09T20:22:13.380688+0000 mgr.a (mgr.14150) 159 : audit [DBG] 
from='client.14370 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:22:16.555 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:16 vm08 bash[23232]: cluster 2026-03-09T20:22:15.226366+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:16.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:16 vm08 bash[23232]: cluster 2026-03-09T20:22:15.226366+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:16.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:16 vm04 bash[22793]: cluster 2026-03-09T20:22:15.226366+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:16.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:16 vm04 bash[22793]: cluster 2026-03-09T20:22:15.226366+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:16.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:16 vm03 bash[20708]: cluster 2026-03-09T20:22:15.226366+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:16.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:16 vm03 bash[20708]: cluster 2026-03-09T20:22:15.226366+0000 mgr.a (mgr.14150) 160 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:17.149 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:22:17.417 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:22:17.418 INFO:teuthology.orchestra.run.vm03.stdout:{"status":"HEALTH_OK","checks":{},"mutes":[]} 2026-03-09T20:22:17.468 INFO:tasks.cephadm.ceph_manager.ceph:wait_until_healthy done 2026-03-09T20:22:17.468 INFO:tasks.cephadm:Setup complete, yielding 2026-03-09T20:22:17.468 INFO:teuthology.run_tasks:Running task cephadm.shell... 
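The wait_until_healthy step above simply repeats the health query until the status flips to HEALTH_OK, as seen in the {"status":"HEALTH_OK"} reply. A minimal equivalent loop from the cephadm shell (poll interval and timeout are arbitrary illustration values) would be:

    # poll cluster health until HEALTH_OK, giving up after about five minutes
    for i in $(seq 1 60); do
        status=$(ceph health --format=json | jq -r '.status')
        [ "$status" = "HEALTH_OK" ] && break
        sleep 5
    done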
2026-03-09T20:22:17.470 INFO:tasks.cephadm:Running commands on role host.a host ubuntu@vm03.local 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- bash -c 'set -e 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> set -x 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ceph orch apply node-exporter 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ceph orch apply grafana 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ceph orch apply alertmanager 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ceph orch apply prometheus 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> sleep 240 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ceph orch ls 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ceph orch ps 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ceph orch host ls 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> MON_DAEMON=$(ceph orch ps --daemon-type mon -f json | jq -r '"'"'last | .daemon_name'"'"') 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> GRAFANA_HOST=$(ceph orch ps --daemon-type grafana -f json | jq -e '"'"'.[]'"'"' | jq -r '"'"'.hostname'"'"') 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> PROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -e '"'"'.[]'"'"' | jq -r '"'"'.hostname'"'"') 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -e '"'"'.[]'"'"' | jq -r '"'"'.hostname'"'"') 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> GRAFANA_IP=$(ceph orch host ls -f json | jq -r --arg GRAFANA_HOST "$GRAFANA_HOST" '"'"'.[] | select(.hostname==$GRAFANA_HOST) | .addr'"'"') 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> PROM_IP=$(ceph orch host ls -f json | jq -r --arg PROM_HOST "$PROM_HOST" '"'"'.[] | select(.hostname==$PROM_HOST) | .addr'"'"') 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ALERTM_IP=$(ceph orch host ls -f json | jq -r --arg ALERTM_HOST "$ALERTM_HOST" '"'"'.[] | select(.hostname==$ALERTM_HOST) | .addr'"'"') 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> # check each host node-exporter metrics endpoint is responsive 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ALL_HOST_IPS=$(ceph orch host ls -f json | jq -r '"'"'.[] | .addr'"'"') 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> for ip in $ALL_HOST_IPS; do 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> curl -s http://${ip}:9100/metric 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> done 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> # check grafana endpoints are responsive and database health is okay 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> curl -k -s https://${GRAFANA_IP}:3000/api/health 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> curl -k -s https://${GRAFANA_IP}:3000/api/health | jq -e '"'"'.database == "ok"'"'"' 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> # stop mon daemon in order to trigger an alert 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ceph orch daemon stop $MON_DAEMON 2026-03-09T20:22:17.470 
DEBUG:teuthology.orchestra.run.vm03:> sleep 120 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> # check prometheus endpoints are responsive and mon down alert is firing 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> curl -s http://${PROM_IP}:9095/api/v1/status/config 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> curl -s http://${PROM_IP}:9095/api/v1/status/config | jq -e '"'"'.status == "success"'"'"' 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> curl -s http://${PROM_IP}:9095/api/v1/alerts 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> curl -s http://${PROM_IP}:9095/api/v1/alerts | jq -e '"'"'.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"'"'"' 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> # check alertmanager endpoints are responsive and mon down alert is active 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> curl -s http://${ALERTM_IP}:9093/api/v2/status 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> curl -s http://${ALERTM_IP}:9093/api/v2/alerts 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> curl -s http://${ALERTM_IP}:9093/api/v2/alerts | jq -e '"'"'.[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"'"'"' 2026-03-09T20:22:17.470 DEBUG:teuthology.orchestra.run.vm03:> ' 2026-03-09T20:22:18.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:18 vm08 bash[23232]: cluster 2026-03-09T20:22:17.226623+0000 mgr.a (mgr.14150) 161 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:18.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:18 vm08 bash[23232]: cluster 2026-03-09T20:22:17.226623+0000 mgr.a (mgr.14150) 161 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:18.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:18 vm08 bash[23232]: audit 2026-03-09T20:22:17.417737+0000 mon.a (mon.0) 436 : audit [DBG] from='client.? 192.168.123.103:0/3967465564' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:22:18.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:18 vm08 bash[23232]: audit 2026-03-09T20:22:17.417737+0000 mon.a (mon.0) 436 : audit [DBG] from='client.? 192.168.123.103:0/3967465564' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:22:18.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:18 vm04 bash[22793]: cluster 2026-03-09T20:22:17.226623+0000 mgr.a (mgr.14150) 161 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:18.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:18 vm04 bash[22793]: cluster 2026-03-09T20:22:17.226623+0000 mgr.a (mgr.14150) 161 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:18.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:18 vm04 bash[22793]: audit 2026-03-09T20:22:17.417737+0000 mon.a (mon.0) 436 : audit [DBG] from='client.? 192.168.123.103:0/3967465564' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:22:18.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:18 vm04 bash[22793]: audit 2026-03-09T20:22:17.417737+0000 mon.a (mon.0) 436 : audit [DBG] from='client.? 
192.168.123.103:0/3967465564' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:22:18.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:18 vm03 bash[20708]: cluster 2026-03-09T20:22:17.226623+0000 mgr.a (mgr.14150) 161 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:18.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:18 vm03 bash[20708]: cluster 2026-03-09T20:22:17.226623+0000 mgr.a (mgr.14150) 161 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:18.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:18 vm03 bash[20708]: audit 2026-03-09T20:22:17.417737+0000 mon.a (mon.0) 436 : audit [DBG] from='client.? 192.168.123.103:0/3967465564' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:22:18.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:18 vm03 bash[20708]: audit 2026-03-09T20:22:17.417737+0000 mon.a (mon.0) 436 : audit [DBG] from='client.? 192.168.123.103:0/3967465564' entity='client.admin' cmd=[{"prefix": "health", "format": "json"}]: dispatch 2026-03-09T20:22:20.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:20 vm08 bash[23232]: cluster 2026-03-09T20:22:19.226940+0000 mgr.a (mgr.14150) 162 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:20.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:20 vm08 bash[23232]: cluster 2026-03-09T20:22:19.226940+0000 mgr.a (mgr.14150) 162 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:20.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:20 vm04 bash[22793]: cluster 2026-03-09T20:22:19.226940+0000 mgr.a (mgr.14150) 162 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:20.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:20 vm04 bash[22793]: cluster 2026-03-09T20:22:19.226940+0000 mgr.a (mgr.14150) 162 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:20.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:20 vm03 bash[20708]: cluster 2026-03-09T20:22:19.226940+0000 mgr.a (mgr.14150) 162 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:20.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:20 vm03 bash[20708]: cluster 2026-03-09T20:22:19.226940+0000 mgr.a (mgr.14150) 162 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:21.161 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:22:21.277 INFO:teuthology.orchestra.run.vm03.stderr:+ ceph orch apply node-exporter 2026-03-09T20:22:21.438 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled node-exporter update... 2026-03-09T20:22:21.460 INFO:teuthology.orchestra.run.vm03.stderr:+ ceph orch apply grafana 2026-03-09T20:22:21.635 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled grafana update... 2026-03-09T20:22:21.652 INFO:teuthology.orchestra.run.vm03.stderr:+ ceph orch apply alertmanager 2026-03-09T20:22:21.817 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled alertmanager update... 
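With the four service specs scheduled, the script now sits in the flat sleep 240 before inspecting anything. A more targeted alternative, shown only as a sketch (field names assumed to match the orchestrator's ceph orch ps JSON output), would be to poll until every monitoring daemon reports running:

    # wait until node-exporter, grafana, alertmanager and prometheus all report "running"
    for i in $(seq 1 48); do
        not_ready=$(ceph orch ps -f json \
          | jq '[.[] | select(.daemon_type | test("node-exporter|grafana|alertmanager|prometheus"))
                     | select(.status_desc != "running")] | length')
        [ "$not_ready" = "0" ] && break
        sleep 5
    done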
2026-03-09T20:22:21.828 INFO:teuthology.orchestra.run.vm03.stderr:+ ceph orch apply prometheus 2026-03-09T20:22:22.072 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled prometheus update... 2026-03-09T20:22:22.092 INFO:teuthology.orchestra.run.vm03.stderr:+ sleep 240 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: cluster 2026-03-09T20:22:21.227188+0000 mgr.a (mgr.14150) 163 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: cluster 2026-03-09T20:22:21.227188+0000 mgr.a (mgr.14150) 163 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.437238+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.437238+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.438269+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.438269+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.634984+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.634984+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.762521+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.762521+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.762996+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": 
"auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.762996+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.768152+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.768152+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.813724+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:21.813724+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:22.071433+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 bash[20708]: audit 2026-03-09T20:22:22.071433+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.308 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:22 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:22.308 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 20:22:22 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: cluster 2026-03-09T20:22:21.227188+0000 mgr.a (mgr.14150) 163 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: cluster 2026-03-09T20:22:21.227188+0000 mgr.a (mgr.14150) 163 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.437238+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.437238+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.438269+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.438269+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.634984+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.634984+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.762521+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.762521+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.762996+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.762996+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.768152+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.768152+0000 mon.a (mon.0) 442 : 
audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.813724+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:21.813724+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:22.071433+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 bash[22793]: audit 2026-03-09T20:22:22.071433+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.656 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:22 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:22.657 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:22 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:22.657 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 20:22:22 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
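While the deployment settles, note that the endpoint probes in the script use plain curl -s, which exits 0 even when the server answers with an HTTP error. A stricter variant of the node-exporter and Grafana checks (a sketch using the same variables the script defines; node-exporter's default telemetry path is /metrics) would be:

    # fail on any non-2xx response instead of silently succeeding
    for ip in $ALL_HOST_IPS; do
        curl -fsS "http://${ip}:9100/metrics" > /dev/null
    done
    curl -kfsS "https://${GRAFANA_IP}:3000/api/health" | jq -e '.database == "ok"'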
2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: cluster 2026-03-09T20:22:21.227188+0000 mgr.a (mgr.14150) 163 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: cluster 2026-03-09T20:22:21.227188+0000 mgr.a (mgr.14150) 163 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.437238+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.437238+0000 mon.a (mon.0) 437 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.438269+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.438269+0000 mon.a (mon.0) 438 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.634984+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.634984+0000 mon.a (mon.0) 439 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.762521+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.762521+0000 mon.a (mon.0) 440 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.762996+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.762996+0000 mon.a (mon.0) 441 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.768152+0000 mon.a (mon.0) 442 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.768152+0000 mon.a (mon.0) 442 : 
audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.813724+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:21.813724+0000 mon.a (mon.0) 443 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:22.071433+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:22 vm08 bash[23232]: audit 2026-03-09T20:22:22.071433+0000 mon.a (mon.0) 444 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:22.969 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:22 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:22.969 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:22 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:22.969 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 20:22:22 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:23.310 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:23.310 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 20:22:23 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:23.311 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:23 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. 
This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:21.432466+0000 mgr.a (mgr.14150) 164 : audit [DBG] from='client.14382 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:21.432466+0000 mgr.a (mgr.14150) 164 : audit [DBG] from='client.14382 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: cephadm 2026-03-09T20:22:21.433120+0000 mgr.a (mgr.14150) 165 : cephadm [INF] Saving service node-exporter spec with placement * 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: cephadm 2026-03-09T20:22:21.433120+0000 mgr.a (mgr.14150) 165 : cephadm [INF] Saving service node-exporter spec with placement * 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:21.630014+0000 mgr.a (mgr.14150) 166 : audit [DBG] from='client.24263 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:21.630014+0000 mgr.a (mgr.14150) 166 : audit [DBG] from='client.24263 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: cephadm 2026-03-09T20:22:21.630729+0000 mgr.a (mgr.14150) 167 : cephadm [INF] Saving service grafana spec with placement count:1 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: cephadm 2026-03-09T20:22:21.630729+0000 mgr.a (mgr.14150) 167 : cephadm [INF] Saving service grafana spec with placement count:1 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: cephadm 2026-03-09T20:22:21.769537+0000 mgr.a (mgr.14150) 168 : cephadm [INF] Deploying daemon node-exporter.vm03 on vm03 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: cephadm 2026-03-09T20:22:21.769537+0000 mgr.a (mgr.14150) 168 : cephadm [INF] Deploying daemon node-exporter.vm03 on vm03 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:21.808899+0000 mgr.a (mgr.14150) 169 : audit [DBG] from='client.24269 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:21.808899+0000 mgr.a (mgr.14150) 169 : audit [DBG] from='client.24269 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 
vm04 bash[22793]: cephadm 2026-03-09T20:22:21.809657+0000 mgr.a (mgr.14150) 170 : cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: cephadm 2026-03-09T20:22:21.809657+0000 mgr.a (mgr.14150) 170 : cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:21.995930+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24275 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:21.995930+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24275 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: cephadm 2026-03-09T20:22:21.996592+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: cephadm 2026-03-09T20:22:21.996592+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:22.450565+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:22.450565+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:22.454885+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:22.454885+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:22.459336+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:22.459336+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:23.201464+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:23.201464+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:23.209785+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.620 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:23.209785+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:23.214159+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.620 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:23 vm04 bash[22793]: audit 2026-03-09T20:22:23.214159+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:21.432466+0000 mgr.a (mgr.14150) 164 : audit [DBG] from='client.14382 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:21.432466+0000 mgr.a (mgr.14150) 164 : audit [DBG] from='client.14382 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: cephadm 2026-03-09T20:22:21.433120+0000 mgr.a (mgr.14150) 165 : cephadm [INF] Saving service node-exporter spec with placement * 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: cephadm 2026-03-09T20:22:21.433120+0000 mgr.a (mgr.14150) 165 : cephadm [INF] Saving service node-exporter spec with placement * 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:21.630014+0000 mgr.a (mgr.14150) 166 : audit [DBG] from='client.24263 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:21.630014+0000 mgr.a (mgr.14150) 166 : audit [DBG] from='client.24263 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: cephadm 2026-03-09T20:22:21.630729+0000 mgr.a (mgr.14150) 167 : cephadm [INF] Saving service grafana spec with placement count:1 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: cephadm 2026-03-09T20:22:21.630729+0000 mgr.a (mgr.14150) 167 : cephadm [INF] Saving service grafana spec with placement count:1 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: cephadm 2026-03-09T20:22:21.769537+0000 mgr.a (mgr.14150) 168 : cephadm [INF] Deploying daemon node-exporter.vm03 on vm03 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: cephadm 2026-03-09T20:22:21.769537+0000 mgr.a (mgr.14150) 168 : cephadm [INF] Deploying daemon node-exporter.vm03 on vm03 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:21.808899+0000 mgr.a (mgr.14150) 169 : audit [DBG] from='client.24269 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", 
""]}]: dispatch 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:21.808899+0000 mgr.a (mgr.14150) 169 : audit [DBG] from='client.24269 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: cephadm 2026-03-09T20:22:21.809657+0000 mgr.a (mgr.14150) 170 : cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: cephadm 2026-03-09T20:22:21.809657+0000 mgr.a (mgr.14150) 170 : cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:21.995930+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24275 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:21.995930+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24275 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: cephadm 2026-03-09T20:22:21.996592+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: cephadm 2026-03-09T20:22:21.996592+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-09T20:22:23.630 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:22.450565+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:22.450565+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:22.454885+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:22.454885+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:22.459336+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:22.459336+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:23.201464+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 
vm08 bash[23232]: audit 2026-03-09T20:22:23.201464+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:23.209785+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:23.209785+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:23.214159+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.631 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 bash[23232]: audit 2026-03-09T20:22:23.214159+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:21.432466+0000 mgr.a (mgr.14150) 164 : audit [DBG] from='client.14382 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:21.432466+0000 mgr.a (mgr.14150) 164 : audit [DBG] from='client.14382 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "node-exporter", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: cephadm 2026-03-09T20:22:21.433120+0000 mgr.a (mgr.14150) 165 : cephadm [INF] Saving service node-exporter spec with placement * 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: cephadm 2026-03-09T20:22:21.433120+0000 mgr.a (mgr.14150) 165 : cephadm [INF] Saving service node-exporter spec with placement * 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:21.630014+0000 mgr.a (mgr.14150) 166 : audit [DBG] from='client.24263 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:21.630014+0000 mgr.a (mgr.14150) 166 : audit [DBG] from='client.24263 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "grafana", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: cephadm 2026-03-09T20:22:21.630729+0000 mgr.a (mgr.14150) 167 : cephadm [INF] Saving service grafana spec with placement count:1 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: cephadm 2026-03-09T20:22:21.630729+0000 mgr.a (mgr.14150) 167 : cephadm [INF] Saving service grafana spec with placement count:1 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: cephadm 2026-03-09T20:22:21.769537+0000 mgr.a (mgr.14150) 168 : cephadm [INF] Deploying daemon node-exporter.vm03 on vm03 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: cephadm 
2026-03-09T20:22:21.769537+0000 mgr.a (mgr.14150) 168 : cephadm [INF] Deploying daemon node-exporter.vm03 on vm03 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:21.808899+0000 mgr.a (mgr.14150) 169 : audit [DBG] from='client.24269 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:21.808899+0000 mgr.a (mgr.14150) 169 : audit [DBG] from='client.24269 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "alertmanager", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: cephadm 2026-03-09T20:22:21.809657+0000 mgr.a (mgr.14150) 170 : cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: cephadm 2026-03-09T20:22:21.809657+0000 mgr.a (mgr.14150) 170 : cephadm [INF] Saving service alertmanager spec with placement count:1 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:21.995930+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24275 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:21.995930+0000 mgr.a (mgr.14150) 171 : audit [DBG] from='client.24275 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "prometheus", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: cephadm 2026-03-09T20:22:21.996592+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: cephadm 2026-03-09T20:22:21.996592+0000 mgr.a (mgr.14150) 172 : cephadm [INF] Saving service prometheus spec with placement count:1 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:22.450565+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:22.450565+0000 mon.a (mon.0) 445 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:22.454885+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:22.454885+0000 mon.a (mon.0) 446 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:22.459336+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 
2026-03-09T20:22:22.459336+0000 mon.a (mon.0) 447 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:23.201464+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:23.201464+0000 mon.a (mon.0) 448 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:23.209785+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:23.209785+0000 mon.a (mon.0) 449 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:23.214159+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:23 vm03 bash[20708]: audit 2026-03-09T20:22:23.214159+0000 mon.a (mon.0) 450 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:23.889 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:23.889 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:23 vm08 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:23.891 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 09 20:22:23 vm08 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:23.891 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 09 20:22:23 vm08 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: cephadm 2026-03-09T20:22:22.459999+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm04 on vm04 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: cephadm 2026-03-09T20:22:22.459999+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm04 on vm04 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: cephadm 2026-03-09T20:22:23.214908+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: cephadm 2026-03-09T20:22:23.214908+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: cluster 2026-03-09T20:22:23.227399+0000 mgr.a (mgr.14150) 175 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: cluster 2026-03-09T20:22:23.227399+0000 mgr.a (mgr.14150) 175 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.917374+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.917374+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.922151+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.922151+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.926424+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.926424+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.930999+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.930999+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.965647+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 
2026-03-09T20:22:23.965647+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.969084+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.969084+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.971242+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.971242+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.974131+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:24 vm04 bash[22793]: audit 2026-03-09T20:22:23.974131+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: cephadm 2026-03-09T20:22:22.459999+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm04 on vm04 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: cephadm 2026-03-09T20:22:22.459999+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm04 on vm04 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: cephadm 2026-03-09T20:22:23.214908+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: cephadm 2026-03-09T20:22:23.214908+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: cluster 2026-03-09T20:22:23.227399+0000 mgr.a (mgr.14150) 175 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: cluster 2026-03-09T20:22:23.227399+0000 mgr.a (mgr.14150) 175 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.917374+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.917374+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.922151+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.922151+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.926424+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.926424+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.930999+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.930999+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.965647+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.965647+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.969084+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.969084+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.971242+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.971242+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.974131+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:24 vm03 bash[20708]: audit 2026-03-09T20:22:23.974131+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: cephadm 2026-03-09T20:22:22.459999+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm04 on 
vm04 2026-03-09T20:22:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: cephadm 2026-03-09T20:22:22.459999+0000 mgr.a (mgr.14150) 173 : cephadm [INF] Deploying daemon node-exporter.vm04 on vm04 2026-03-09T20:22:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: cephadm 2026-03-09T20:22:23.214908+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-09T20:22:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: cephadm 2026-03-09T20:22:23.214908+0000 mgr.a (mgr.14150) 174 : cephadm [INF] Deploying daemon node-exporter.vm08 on vm08 2026-03-09T20:22:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: cluster 2026-03-09T20:22:23.227399+0000 mgr.a (mgr.14150) 175 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: cluster 2026-03-09T20:22:23.227399+0000 mgr.a (mgr.14150) 175 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.917374+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.917374+0000 mon.a (mon.0) 451 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.922151+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.922151+0000 mon.a (mon.0) 452 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.926424+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.926424+0000 mon.a (mon.0) 453 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.930999+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.930999+0000 mon.a (mon.0) 454 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.965647+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.965647+0000 mon.a (mon.0) 455 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 
bash[23232]: audit 2026-03-09T20:22:23.969084+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.969084+0000 mon.a (mon.0) 456 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.971242+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.971242+0000 mon.a (mon.0) 457 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.974131+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:24 vm08 bash[23232]: audit 2026-03-09T20:22:23.974131+0000 mon.a (mon.0) 458 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:25.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:25 vm08 bash[23232]: cephadm 2026-03-09T20:22:23.938459+0000 mgr.a (mgr.14150) 176 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T20:22:25.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:25 vm08 bash[23232]: cephadm 2026-03-09T20:22:23.938459+0000 mgr.a (mgr.14150) 176 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T20:22:25.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:25 vm08 bash[23232]: audit 2026-03-09T20:22:23.971505+0000 mgr.a (mgr.14150) 177 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:25.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:25 vm08 bash[23232]: audit 2026-03-09T20:22:23.971505+0000 mgr.a (mgr.14150) 177 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:25.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:25 vm08 bash[23232]: cephadm 2026-03-09T20:22:23.980936+0000 mgr.a (mgr.14150) 178 : cephadm [INF] Deploying daemon grafana.vm03 on vm03 2026-03-09T20:22:25.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:25 vm08 bash[23232]: cephadm 2026-03-09T20:22:23.980936+0000 mgr.a (mgr.14150) 178 : cephadm [INF] Deploying daemon grafana.vm03 on vm03 2026-03-09T20:22:25.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:25 vm08 bash[23232]: audit 2026-03-09T20:22:24.376901+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:25.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:25 vm08 bash[23232]: audit 2026-03-09T20:22:24.376901+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:25.654 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:25 vm04 bash[22793]: cephadm 2026-03-09T20:22:23.938459+0000 mgr.a (mgr.14150) 176 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T20:22:25.654 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:25 vm04 bash[22793]: cephadm 2026-03-09T20:22:23.938459+0000 mgr.a (mgr.14150) 176 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T20:22:25.654 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:25 vm04 bash[22793]: audit 2026-03-09T20:22:23.971505+0000 mgr.a (mgr.14150) 177 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:25.654 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:25 vm04 bash[22793]: audit 2026-03-09T20:22:23.971505+0000 mgr.a (mgr.14150) 177 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:25.654 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:25 vm04 bash[22793]: cephadm 2026-03-09T20:22:23.980936+0000 mgr.a (mgr.14150) 178 : cephadm [INF] Deploying daemon grafana.vm03 on vm03 2026-03-09T20:22:25.654 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:25 vm04 bash[22793]: cephadm 2026-03-09T20:22:23.980936+0000 mgr.a (mgr.14150) 178 : cephadm [INF] Deploying daemon grafana.vm03 on vm03 2026-03-09T20:22:25.654 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:25 vm04 bash[22793]: audit 2026-03-09T20:22:24.376901+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:25.654 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:25 vm04 bash[22793]: audit 2026-03-09T20:22:24.376901+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:25 vm03 bash[20708]: cephadm 2026-03-09T20:22:23.938459+0000 mgr.a (mgr.14150) 176 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T20:22:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:25 vm03 bash[20708]: cephadm 2026-03-09T20:22:23.938459+0000 mgr.a (mgr.14150) 176 : cephadm [INF] Regenerating cephadm self-signed grafana TLS certificates 2026-03-09T20:22:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:25 vm03 bash[20708]: audit 2026-03-09T20:22:23.971505+0000 mgr.a (mgr.14150) 177 : audit [DBG] from='mon.0 -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:25 vm03 bash[20708]: audit 2026-03-09T20:22:23.971505+0000 mgr.a (mgr.14150) 177 : audit [DBG] from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch 2026-03-09T20:22:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:25 vm03 bash[20708]: cephadm 2026-03-09T20:22:23.980936+0000 mgr.a (mgr.14150) 178 : cephadm [INF] Deploying daemon grafana.vm03 on vm03 2026-03-09T20:22:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:25 vm03 bash[20708]: cephadm 2026-03-09T20:22:23.980936+0000 mgr.a (mgr.14150) 178 : cephadm [INF] Deploying daemon grafana.vm03 on vm03 2026-03-09T20:22:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:25 vm03 bash[20708]: audit 2026-03-09T20:22:24.376901+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:25 vm03 bash[20708]: audit 2026-03-09T20:22:24.376901+0000 mon.a (mon.0) 459 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:26 vm03 bash[20708]: cluster 2026-03-09T20:22:25.227653+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:26 vm03 bash[20708]: cluster 2026-03-09T20:22:25.227653+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:26.795 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:26 vm04 bash[22793]: cluster 2026-03-09T20:22:25.227653+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:26.795 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:26 vm04 bash[22793]: cluster 2026-03-09T20:22:25.227653+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:26 vm08 bash[23232]: cluster 2026-03-09T20:22:25.227653+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:26.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:26 vm08 bash[23232]: cluster 2026-03-09T20:22:25.227653+0000 mgr.a (mgr.14150) 179 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:27 vm08 bash[23232]: cluster 2026-03-09T20:22:27.227922+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:27.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:27 vm08 bash[23232]: cluster 2026-03-09T20:22:27.227922+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:27 vm04 bash[22793]: cluster 2026-03-09T20:22:27.227922+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB 
data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:27.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:27 vm04 bash[22793]: cluster 2026-03-09T20:22:27.227922+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:27.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:27 vm03 bash[20708]: cluster 2026-03-09T20:22:27.227922+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:27.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:27 vm03 bash[20708]: cluster 2026-03-09T20:22:27.227922+0000 mgr.a (mgr.14150) 180 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:30 vm08 bash[23232]: cluster 2026-03-09T20:22:29.228159+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:30.556 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:30 vm08 bash[23232]: cluster 2026-03-09T20:22:29.228159+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:30.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:30 vm04 bash[22793]: cluster 2026-03-09T20:22:29.228159+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:30.619 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:30 vm04 bash[22793]: cluster 2026-03-09T20:22:29.228159+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:30 vm03 bash[20708]: cluster 2026-03-09T20:22:29.228159+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:30.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:30 vm03 bash[20708]: cluster 2026-03-09T20:22:29.228159+0000 mgr.a (mgr.14150) 181 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:32.783 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:32 vm03 bash[20708]: cluster 2026-03-09T20:22:31.228398+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:32.783 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:32 vm03 bash[20708]: cluster 2026-03-09T20:22:31.228398+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:32.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:32 vm08 bash[23232]: cluster 2026-03-09T20:22:31.228398+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:32.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:32 vm08 bash[23232]: cluster 2026-03-09T20:22:31.228398+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:32.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:32 vm04 bash[22793]: cluster 
2026-03-09T20:22:31.228398+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:32.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:32 vm04 bash[22793]: cluster 2026-03-09T20:22:31.228398+0000 mgr.a (mgr.14150) 182 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:33.358 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:33.358 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:33.359 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:33 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:33.359 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:33 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:33.359 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 20:22:33 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:33.359 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 20:22:33 vm03 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T20:22:33.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 bash[20708]: cluster 2026-03-09T20:22:33.228658+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:33.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 bash[20708]: cluster 2026-03-09T20:22:33.228658+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:33.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 bash[20708]: audit 2026-03-09T20:22:33.390284+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 bash[20708]: audit 2026-03-09T20:22:33.390284+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 bash[20708]: audit 2026-03-09T20:22:33.395391+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 bash[20708]: audit 2026-03-09T20:22:33.395391+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 bash[20708]: audit 2026-03-09T20:22:33.400040+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 bash[20708]: audit 2026-03-09T20:22:33.400040+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 bash[20708]: audit 2026-03-09T20:22:33.406576+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:33 vm03 bash[20708]: audit 2026-03-09T20:22:33.406576+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:33 vm08 bash[23232]: cluster 2026-03-09T20:22:33.228658+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:33.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:33 vm08 bash[23232]: cluster 2026-03-09T20:22:33.228658+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:33.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:33 vm08 bash[23232]: audit 2026-03-09T20:22:33.390284+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:33 vm08 bash[23232]: audit 2026-03-09T20:22:33.390284+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:33 vm08 bash[23232]: audit 2026-03-09T20:22:33.395391+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
2026-03-09T20:22:33.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:33 vm08 bash[23232]: audit 2026-03-09T20:22:33.395391+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:33 vm08 bash[23232]: audit 2026-03-09T20:22:33.400040+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:33 vm08 bash[23232]: audit 2026-03-09T20:22:33.400040+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:33 vm08 bash[23232]: audit 2026-03-09T20:22:33.406576+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:33 vm08 bash[23232]: audit 2026-03-09T20:22:33.406576+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:33 vm04 bash[22793]: cluster 2026-03-09T20:22:33.228658+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:33.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:33 vm04 bash[22793]: cluster 2026-03-09T20:22:33.228658+0000 mgr.a (mgr.14150) 183 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:33.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:33 vm04 bash[22793]: audit 2026-03-09T20:22:33.390284+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:33 vm04 bash[22793]: audit 2026-03-09T20:22:33.390284+0000 mon.a (mon.0) 460 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:33 vm04 bash[22793]: audit 2026-03-09T20:22:33.395391+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:33 vm04 bash[22793]: audit 2026-03-09T20:22:33.395391+0000 mon.a (mon.0) 461 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:33 vm04 bash[22793]: audit 2026-03-09T20:22:33.400040+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:33 vm04 bash[22793]: audit 2026-03-09T20:22:33.400040+0000 mon.a (mon.0) 462 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:33 vm04 bash[22793]: audit 2026-03-09T20:22:33.406576+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:33.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:33 vm04 bash[22793]: audit 2026-03-09T20:22:33.406576+0000 mon.a (mon.0) 463 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:34.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
20:22:34 vm03 bash[20708]: audit 2026-03-09T20:22:33.423190+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:34.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:34 vm03 bash[20708]: audit 2026-03-09T20:22:33.423190+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:34.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:34 vm03 bash[20708]: audit 2026-03-09T20:22:34.382878+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:34.669 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:34 vm03 bash[20708]: audit 2026-03-09T20:22:34.382878+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:34.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:34 vm08 bash[23232]: audit 2026-03-09T20:22:33.423190+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:34.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:34 vm08 bash[23232]: audit 2026-03-09T20:22:33.423190+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:34.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:34 vm08 bash[23232]: audit 2026-03-09T20:22:34.382878+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:34.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:34 vm08 bash[23232]: audit 2026-03-09T20:22:34.382878+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:34.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:34 vm04 bash[22793]: audit 2026-03-09T20:22:33.423190+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:34.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:34 vm04 bash[22793]: audit 2026-03-09T20:22:33.423190+0000 mon.a (mon.0) 464 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:34.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:34 vm04 bash[22793]: audit 2026-03-09T20:22:34.382878+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:34.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:34 vm04 bash[22793]: audit 2026-03-09T20:22:34.382878+0000 mon.a (mon.0) 465 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:35.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:35 vm08 bash[23232]: cluster 2026-03-09T20:22:35.228928+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:35.806 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:35 vm08 bash[23232]: cluster 2026-03-09T20:22:35.228928+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 
2026-03-09T20:22:35.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:35 vm04 bash[22793]: cluster 2026-03-09T20:22:35.228928+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:35.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:35 vm04 bash[22793]: cluster 2026-03-09T20:22:35.228928+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:35.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:35 vm03 bash[20708]: cluster 2026-03-09T20:22:35.228928+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:35.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:35 vm03 bash[20708]: cluster 2026-03-09T20:22:35.228928+0000 mgr.a (mgr.14150) 184 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:38.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:38 vm08 bash[23232]: cluster 2026-03-09T20:22:37.229167+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:38.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:38 vm08 bash[23232]: cluster 2026-03-09T20:22:37.229167+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:38.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:38 vm04 bash[22793]: cluster 2026-03-09T20:22:37.229167+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:38.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:38 vm04 bash[22793]: cluster 2026-03-09T20:22:37.229167+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:38.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:38 vm03 bash[20708]: cluster 2026-03-09T20:22:37.229167+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:38.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:38 vm03 bash[20708]: cluster 2026-03-09T20:22:37.229167+0000 mgr.a (mgr.14150) 185 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.425279+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.425279+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.431214+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.431214+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.435881+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.435881+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.442689+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.442689+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.616195+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.616195+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.620988+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.620988+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.762531+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.762531+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.763126+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.763126+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.767673+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: audit 2026-03-09T20:22:38.767673+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: cephadm 2026-03-09T20:22:38.773068+0000 
mgr.a (mgr.14150) 186 : cephadm [INF] Deploying daemon alertmanager.vm08 on vm08 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: cephadm 2026-03-09T20:22:38.773068+0000 mgr.a (mgr.14150) 186 : cephadm [INF] Deploying daemon alertmanager.vm08 on vm08 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: cluster 2026-03-09T20:22:39.229390+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:39.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:39 vm08 bash[23232]: cluster 2026-03-09T20:22:39.229390+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.425279+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.425279+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.431214+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.431214+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.435881+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.435881+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.442689+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.442689+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.616195+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.616195+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.620988+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.620988+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 
2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.762531+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.762531+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.763126+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.763126+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:39.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.767673+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: audit 2026-03-09T20:22:38.767673+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: cephadm 2026-03-09T20:22:38.773068+0000 mgr.a (mgr.14150) 186 : cephadm [INF] Deploying daemon alertmanager.vm08 on vm08 2026-03-09T20:22:39.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: cephadm 2026-03-09T20:22:38.773068+0000 mgr.a (mgr.14150) 186 : cephadm [INF] Deploying daemon alertmanager.vm08 on vm08 2026-03-09T20:22:39.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: cluster 2026-03-09T20:22:39.229390+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:39.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:39 vm04 bash[22793]: cluster 2026-03-09T20:22:39.229390+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.425279+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.425279+0000 mon.a (mon.0) 466 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.431214+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.431214+0000 mon.a (mon.0) 467 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.435881+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.435881+0000 mon.a (mon.0) 468 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.442689+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.442689+0000 mon.a (mon.0) 469 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.616195+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.616195+0000 mon.a (mon.0) 470 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.620988+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.620988+0000 mon.a (mon.0) 471 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.762531+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.762531+0000 mon.a (mon.0) 472 : audit [DBG] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.763126+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.763126+0000 mon.a (mon.0) 473 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.767673+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: audit 2026-03-09T20:22:38.767673+0000 mon.a (mon.0) 474 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: cephadm 2026-03-09T20:22:38.773068+0000 mgr.a (mgr.14150) 186 : cephadm [INF] Deploying 
daemon alertmanager.vm08 on vm08 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: cephadm 2026-03-09T20:22:38.773068+0000 mgr.a (mgr.14150) 186 : cephadm [INF] Deploying daemon alertmanager.vm08 on vm08 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: cluster 2026-03-09T20:22:39.229390+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:39 vm03 bash[20708]: cluster 2026-03-09T20:22:39.229390+0000 mgr.a (mgr.14150) 187 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:42.550 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:42 vm08 bash[23232]: cluster 2026-03-09T20:22:41.229604+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:42.550 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:42 vm08 bash[23232]: cluster 2026-03-09T20:22:41.229604+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:42.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:42 vm04 bash[22793]: cluster 2026-03-09T20:22:41.229604+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:42.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:42 vm04 bash[22793]: cluster 2026-03-09T20:22:41.229604+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:42.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:42 vm03 bash[20708]: cluster 2026-03-09T20:22:41.229604+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:42.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:42 vm03 bash[20708]: cluster 2026-03-09T20:22:41.229604+0000 mgr.a (mgr.14150) 188 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:43.151 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:42 vm08 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:43.151 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:43 vm08 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:43.151 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 09 20:22:42 vm08 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:43.151 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 09 20:22:43 vm08 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: audit 2026-03-09T20:22:43.192444+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: audit 2026-03-09T20:22:43.192444+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: audit 2026-03-09T20:22:43.197048+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: audit 2026-03-09T20:22:43.197048+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: audit 2026-03-09T20:22:43.202367+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: audit 2026-03-09T20:22:43.202367+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: audit 2026-03-09T20:22:43.206424+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: audit 2026-03-09T20:22:43.206424+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: cluster 2026-03-09T20:22:43.229806+0000 mgr.a (mgr.14150) 189 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: cluster 2026-03-09T20:22:43.229806+0000 mgr.a (mgr.14150) 189 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: cephadm 2026-03-09T20:22:43.363620+0000 mgr.a (mgr.14150) 190 : cephadm [INF] Deploying daemon prometheus.vm04 on vm04 2026-03-09T20:22:44.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:44 vm08 bash[23232]: cephadm 2026-03-09T20:22:43.363620+0000 mgr.a (mgr.14150) 190 : cephadm [INF] Deploying daemon prometheus.vm04 on vm04 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: audit 2026-03-09T20:22:43.192444+0000 mon.a (mon.0) 475 : audit 
[INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: audit 2026-03-09T20:22:43.192444+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: audit 2026-03-09T20:22:43.197048+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: audit 2026-03-09T20:22:43.197048+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: audit 2026-03-09T20:22:43.202367+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: audit 2026-03-09T20:22:43.202367+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: audit 2026-03-09T20:22:43.206424+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: audit 2026-03-09T20:22:43.206424+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: cluster 2026-03-09T20:22:43.229806+0000 mgr.a (mgr.14150) 189 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: cluster 2026-03-09T20:22:43.229806+0000 mgr.a (mgr.14150) 189 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: cephadm 2026-03-09T20:22:43.363620+0000 mgr.a (mgr.14150) 190 : cephadm [INF] Deploying daemon prometheus.vm04 on vm04 2026-03-09T20:22:44.618 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:44 vm04 bash[22793]: cephadm 2026-03-09T20:22:43.363620+0000 mgr.a (mgr.14150) 190 : cephadm [INF] Deploying daemon prometheus.vm04 on vm04 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: audit 2026-03-09T20:22:43.192444+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: audit 2026-03-09T20:22:43.192444+0000 mon.a (mon.0) 475 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: audit 2026-03-09T20:22:43.197048+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: audit 2026-03-09T20:22:43.197048+0000 mon.a (mon.0) 476 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.657 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: audit 2026-03-09T20:22:43.202367+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: audit 2026-03-09T20:22:43.202367+0000 mon.a (mon.0) 477 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: audit 2026-03-09T20:22:43.206424+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: audit 2026-03-09T20:22:43.206424+0000 mon.a (mon.0) 478 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: cluster 2026-03-09T20:22:43.229806+0000 mgr.a (mgr.14150) 189 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: cluster 2026-03-09T20:22:43.229806+0000 mgr.a (mgr.14150) 189 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: cephadm 2026-03-09T20:22:43.363620+0000 mgr.a (mgr.14150) 190 : cephadm [INF] Deploying daemon prometheus.vm04 on vm04 2026-03-09T20:22:44.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:44 vm03 bash[20708]: cephadm 2026-03-09T20:22:43.363620+0000 mgr.a (mgr.14150) 190 : cephadm [INF] Deploying daemon prometheus.vm04 on vm04 2026-03-09T20:22:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:45 vm08 bash[23232]: audit 2026-03-09T20:22:44.388007+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:45.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:45 vm08 bash[23232]: audit 2026-03-09T20:22:44.388007+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:45 vm03 bash[20708]: audit 2026-03-09T20:22:44.388007+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:45.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:45 vm03 bash[20708]: audit 2026-03-09T20:22:44.388007+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:45.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:45 vm04 bash[22793]: audit 2026-03-09T20:22:44.388007+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:45.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:45 vm04 bash[22793]: audit 2026-03-09T20:22:44.388007+0000 mon.a (mon.0) 479 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:46.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:46 vm03 bash[20708]: cluster 2026-03-09T20:22:45.230091+0000 mgr.a (mgr.14150) 191 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:46.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:46 vm03 
bash[20708]: cluster 2026-03-09T20:22:45.230091+0000 mgr.a (mgr.14150) 191 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:46.753 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:46 vm04 bash[22793]: cluster 2026-03-09T20:22:45.230091+0000 mgr.a (mgr.14150) 191 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:46.753 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:46 vm04 bash[22793]: cluster 2026-03-09T20:22:45.230091+0000 mgr.a (mgr.14150) 191 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:46.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:46 vm08 bash[23232]: cluster 2026-03-09T20:22:45.230091+0000 mgr.a (mgr.14150) 191 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:46.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:46 vm08 bash[23232]: cluster 2026-03-09T20:22:45.230091+0000 mgr.a (mgr.14150) 191 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:47.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:47 vm03 bash[20708]: cluster 2026-03-09T20:22:47.230376+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:47.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:47 vm03 bash[20708]: cluster 2026-03-09T20:22:47.230376+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:47.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:47 vm08 bash[23232]: cluster 2026-03-09T20:22:47.230376+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:47.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:47 vm08 bash[23232]: cluster 2026-03-09T20:22:47.230376+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:47.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:47 vm04 bash[22793]: cluster 2026-03-09T20:22:47.230376+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:47.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:47 vm04 bash[22793]: cluster 2026-03-09T20:22:47.230376+0000 mgr.a (mgr.14150) 192 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:49.457 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 20:22:49 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:49.457 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 20:22:49 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. 
Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:49.458 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:49 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:49.458 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:49 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:49.458 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:49 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 2026-03-09T20:22:49.458 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:49 vm04 systemd[1]: /etc/systemd/system/ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@.service:23: Unit configured to use KillMode=none. This is unsafe, as it disables systemd's process lifecycle management for the service. Please update your service to use a safer KillMode=, such as 'mixed' or 'control-group'. Support for KillMode=none is deprecated and will eventually be removed. 
2026-03-09T20:22:50.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:50 vm08 bash[23232]: cluster 2026-03-09T20:22:49.230670+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:50.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:50 vm08 bash[23232]: cluster 2026-03-09T20:22:49.230670+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:50.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:50 vm08 bash[23232]: audit 2026-03-09T20:22:49.481604+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:50 vm08 bash[23232]: audit 2026-03-09T20:22:49.481604+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:50 vm08 bash[23232]: audit 2026-03-09T20:22:49.485204+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:50 vm08 bash[23232]: audit 2026-03-09T20:22:49.485204+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:50 vm08 bash[23232]: audit 2026-03-09T20:22:49.488312+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:50 vm08 bash[23232]: audit 2026-03-09T20:22:49.488312+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:50 vm08 bash[23232]: audit 2026-03-09T20:22:49.490318+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T20:22:50.557 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:50 vm08 bash[23232]: audit 2026-03-09T20:22:49.490318+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20708]: cluster 2026-03-09T20:22:49.230670+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20708]: cluster 2026-03-09T20:22:49.230670+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20708]: audit 2026-03-09T20:22:49.481604+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20708]: audit 2026-03-09T20:22:49.481604+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:50 vm03 
bash[20708]: audit 2026-03-09T20:22:49.485204+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20708]: audit 2026-03-09T20:22:49.485204+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20708]: audit 2026-03-09T20:22:49.488312+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20708]: audit 2026-03-09T20:22:49.488312+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20708]: audit 2026-03-09T20:22:49.490318+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20708]: audit 2026-03-09T20:22:49.490318+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20968]: ignoring --setuser ceph since I am not root 2026-03-09T20:22:50.597 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20968]: ignoring --setgroup ceph since I am not root 2026-03-09T20:22:50.603 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[23235]: ignoring --setuser ceph since I am not root 2026-03-09T20:22:50.603 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[23235]: ignoring --setgroup ceph since I am not root 2026-03-09T20:22:50.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[22793]: cluster 2026-03-09T20:22:49.230670+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:50.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[22793]: cluster 2026-03-09T20:22:49.230670+0000 mgr.a (mgr.14150) 193 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 481 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:50.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[22793]: audit 2026-03-09T20:22:49.481604+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[22793]: audit 2026-03-09T20:22:49.481604+0000 mon.a (mon.0) 480 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[22793]: audit 2026-03-09T20:22:49.485204+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[22793]: audit 2026-03-09T20:22:49.485204+0000 mon.a (mon.0) 481 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[22793]: audit 
2026-03-09T20:22:49.488312+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[22793]: audit 2026-03-09T20:22:49.488312+0000 mon.a (mon.0) 482 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' 2026-03-09T20:22:50.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[22793]: audit 2026-03-09T20:22:49.490318+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T20:22:50.604 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[22793]: audit 2026-03-09T20:22:49.490318+0000 mon.a (mon.0) 483 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch 2026-03-09T20:22:50.867 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[23235]: debug 2026-03-09T20:22:50.600+0000 7f27c8e46140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T20:22:50.867 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[23235]: debug 2026-03-09T20:22:50.636+0000 7f27c8e46140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:22:50.868 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:50 vm04 bash[23235]: debug 2026-03-09T20:22:50.744+0000 7f27c8e46140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:22:50.907 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20968]: debug 2026-03-09T20:22:50.591+0000 7fc45fd9f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member 2026-03-09T20:22:50.907 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20968]: debug 2026-03-09T20:22:50.627+0000 7fc45fd9f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member 2026-03-09T20:22:50.907 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:50 vm03 bash[20968]: debug 2026-03-09T20:22:50.739+0000 7fc45fd9f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member 2026-03-09T20:22:51.367 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: debug 2026-03-09T20:22:51.004+0000 7f27c8e46140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:22:51.407 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: debug 2026-03-09T20:22:51.003+0000 7fc45fd9f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member 2026-03-09T20:22:51.727 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[22793]: audit 2026-03-09T20:22:50.493860+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T20:22:51.727 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[22793]: audit 2026-03-09T20:22:50.493860+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T20:22:51.727 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[22793]: cluster 2026-03-09T20:22:50.499602+0000 mon.a (mon.0) 485 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-09T20:22:51.727 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[22793]: cluster 2026-03-09T20:22:50.499602+0000 mon.a (mon.0) 485 : cluster 
[DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-09T20:22:51.727 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: debug 2026-03-09T20:22:51.408+0000 7f27c8e46140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:22:51.727 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: debug 2026-03-09T20:22:51.488+0000 7f27c8e46140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T20:22:51.727 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 2026-03-09T20:22:51.727 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T20:22:51.727 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: from numpy import show_config as show_numpy_config 2026-03-09T20:22:51.727 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: debug 2026-03-09T20:22:51.600+0000 7f27c8e46140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:22:51.741 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20708]: audit 2026-03-09T20:22:50.493860+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T20:22:51.741 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20708]: audit 2026-03-09T20:22:50.493860+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T20:22:51.741 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20708]: cluster 2026-03-09T20:22:50.499602+0000 mon.a (mon.0) 485 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-09T20:22:51.742 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20708]: cluster 2026-03-09T20:22:50.499602+0000 mon.a (mon.0) 485 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-09T20:22:51.742 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: debug 2026-03-09T20:22:51.415+0000 7fc45fd9f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member 2026-03-09T20:22:51.742 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: debug 2026-03-09T20:22:51.503+0000 7fc45fd9f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member 2026-03-09T20:22:51.742 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode. 
2026-03-09T20:22:51.742 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve. 2026-03-09T20:22:51.742 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: from numpy import show_config as show_numpy_config 2026-03-09T20:22:51.742 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: debug 2026-03-09T20:22:51.615+0000 7fc45fd9f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member 2026-03-09T20:22:51.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:51 vm08 bash[23232]: audit 2026-03-09T20:22:50.493860+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T20:22:51.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:51 vm08 bash[23232]: audit 2026-03-09T20:22:50.493860+0000 mon.a (mon.0) 484 : audit [INF] from='mgr.14150 192.168.123.103:0/3014988572' entity='mgr.a' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished 2026-03-09T20:22:51.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:51 vm08 bash[23232]: cluster 2026-03-09T20:22:50.499602+0000 mon.a (mon.0) 485 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-09T20:22:51.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:51 vm08 bash[23232]: cluster 2026-03-09T20:22:50.499602+0000 mon.a (mon.0) 485 : cluster [DBG] mgrmap e15: a(active, since 3m), standbys: b 2026-03-09T20:22:52.117 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: debug 2026-03-09T20:22:51.724+0000 7f27c8e46140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:22:52.117 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: debug 2026-03-09T20:22:51.756+0000 7f27c8e46140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T20:22:52.118 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: debug 2026-03-09T20:22:51.792+0000 7f27c8e46140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:22:52.118 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: debug 2026-03-09T20:22:51.828+0000 7f27c8e46140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T20:22:52.118 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:51 vm04 bash[23235]: debug 2026-03-09T20:22:51.872+0000 7f27c8e46140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:22:52.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: debug 2026-03-09T20:22:51.739+0000 7fc45fd9f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member 2026-03-09T20:22:52.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: debug 2026-03-09T20:22:51.771+0000 7fc45fd9f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member 2026-03-09T20:22:52.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: debug 2026-03-09T20:22:51.803+0000 7fc45fd9f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member 2026-03-09T20:22:52.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 vm03 bash[20968]: debug 2026-03-09T20:22:51.843+0000 7fc45fd9f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member 2026-03-09T20:22:52.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:51 
vm03 bash[20968]: debug 2026-03-09T20:22:51.887+0000 7fc45fd9f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member 2026-03-09T20:22:52.543 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:52 vm04 bash[23235]: debug 2026-03-09T20:22:52.260+0000 7f27c8e46140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T20:22:52.543 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:52 vm04 bash[23235]: debug 2026-03-09T20:22:52.292+0000 7f27c8e46140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T20:22:52.543 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:52 vm04 bash[23235]: debug 2026-03-09T20:22:52.324+0000 7f27c8e46140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:22:52.543 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:52 vm04 bash[23235]: debug 2026-03-09T20:22:52.452+0000 7f27c8e46140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:22:52.543 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:52 vm04 bash[23235]: debug 2026-03-09T20:22:52.500+0000 7f27c8e46140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T20:22:52.565 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:52 vm03 bash[20968]: debug 2026-03-09T20:22:52.275+0000 7fc45fd9f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member 2026-03-09T20:22:52.565 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:52 vm03 bash[20968]: debug 2026-03-09T20:22:52.307+0000 7fc45fd9f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member 2026-03-09T20:22:52.565 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:52 vm03 bash[20968]: debug 2026-03-09T20:22:52.343+0000 7fc45fd9f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member 2026-03-09T20:22:52.565 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:52 vm03 bash[20968]: debug 2026-03-09T20:22:52.479+0000 7fc45fd9f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member 2026-03-09T20:22:52.565 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:52 vm03 bash[20968]: debug 2026-03-09T20:22:52.523+0000 7fc45fd9f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member 2026-03-09T20:22:52.795 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:52 vm04 bash[23235]: debug 2026-03-09T20:22:52.540+0000 7f27c8e46140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:22:52.795 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:52 vm04 bash[23235]: debug 2026-03-09T20:22:52.648+0000 7f27c8e46140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:22:52.835 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:52 vm03 bash[20968]: debug 2026-03-09T20:22:52.559+0000 7fc45fd9f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member 2026-03-09T20:22:52.835 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:52 vm03 bash[20968]: debug 2026-03-09T20:22:52.675+0000 7fc45fd9f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:22:53.117 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:52 vm04 bash[23235]: debug 2026-03-09T20:22:52.792+0000 7f27c8e46140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:22:53.117 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:52 vm04 bash[23235]: debug 2026-03-09T20:22:52.952+0000 7f27c8e46140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:22:53.117 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:52 vm04 bash[23235]: debug 2026-03-09T20:22:52.984+0000 7f27c8e46140 -1 mgr[py] 
Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:22:53.117 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[23235]: debug 2026-03-09T20:22:53.020+0000 7f27c8e46140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T20:22:53.156 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:52 vm03 bash[20968]: debug 2026-03-09T20:22:52.831+0000 7fc45fd9f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member 2026-03-09T20:22:53.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20968]: debug 2026-03-09T20:22:52.995+0000 7fc45fd9f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member 2026-03-09T20:22:53.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20968]: debug 2026-03-09T20:22:53.027+0000 7fc45fd9f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member 2026-03-09T20:22:53.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20968]: debug 2026-03-09T20:22:53.063+0000 7fc45fd9f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member 2026-03-09T20:22:53.426 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[23235]: debug 2026-03-09T20:22:53.152+0000 7f27c8e46140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:22:53.426 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[23235]: debug 2026-03-09T20:22:53.356+0000 7f27c8e46140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T20:22:53.426 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[23235]: [09/Mar/2026:20:22:53] ENGINE Bus STARTING 2026-03-09T20:22:53.426 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[23235]: CherryPy Checker: 2026-03-09T20:22:53.426 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[23235]: The Application mounted at '' has an empty config. 2026-03-09T20:22:53.482 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: cluster 2026-03-09T20:22:53.360204+0000 mon.a (mon.0) 486 : cluster [DBG] Standby manager daemon b restarted 2026-03-09T20:22:53.482 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: cluster 2026-03-09T20:22:53.360204+0000 mon.a (mon.0) 486 : cluster [DBG] Standby manager daemon b restarted 2026-03-09T20:22:53.482 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: cluster 2026-03-09T20:22:53.360284+0000 mon.a (mon.0) 487 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:22:53.482 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: cluster 2026-03-09T20:22:53.360284+0000 mon.a (mon.0) 487 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:22:53.482 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: audit 2026-03-09T20:22:53.361808+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:22:53.482 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: audit 2026-03-09T20:22:53.361808+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:22:53.482 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: audit 2026-03-09T20:22:53.362267+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 
192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:22:53.483 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: audit 2026-03-09T20:22:53.362267+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:22:53.483 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: audit 2026-03-09T20:22:53.362888+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:22:53.483 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: audit 2026-03-09T20:22:53.362888+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:22:53.483 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: audit 2026-03-09T20:22:53.363353+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:22:53.483 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20708]: audit 2026-03-09T20:22:53.363353+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:22:53.483 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20968]: debug 2026-03-09T20:22:53.203+0000 7fc45fd9f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member 2026-03-09T20:22:53.483 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20968]: debug 2026-03-09T20:22:53.407+0000 7fc45fd9f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: cluster 2026-03-09T20:22:53.360204+0000 mon.a (mon.0) 486 : cluster [DBG] Standby manager daemon b restarted 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: cluster 2026-03-09T20:22:53.360204+0000 mon.a (mon.0) 486 : cluster [DBG] Standby manager daemon b restarted 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: cluster 2026-03-09T20:22:53.360284+0000 mon.a (mon.0) 487 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: cluster 2026-03-09T20:22:53.360284+0000 mon.a (mon.0) 487 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: audit 2026-03-09T20:22:53.361808+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: audit 2026-03-09T20:22:53.361808+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.? 
192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: audit 2026-03-09T20:22:53.362267+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: audit 2026-03-09T20:22:53.362267+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: audit 2026-03-09T20:22:53.362888+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: audit 2026-03-09T20:22:53.362888+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: audit 2026-03-09T20:22:53.363353+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:22:53.545 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:53 vm08 bash[23232]: audit 2026-03-09T20:22:53.363353+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: cluster 2026-03-09T20:22:53.360204+0000 mon.a (mon.0) 486 : cluster [DBG] Standby manager daemon b restarted 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: cluster 2026-03-09T20:22:53.360204+0000 mon.a (mon.0) 486 : cluster [DBG] Standby manager daemon b restarted 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: cluster 2026-03-09T20:22:53.360284+0000 mon.a (mon.0) 487 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: cluster 2026-03-09T20:22:53.360284+0000 mon.a (mon.0) 487 : cluster [DBG] Standby manager daemon b started 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: audit 2026-03-09T20:22:53.361808+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: audit 2026-03-09T20:22:53.361808+0000 mon.b (mon.2) 8 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/crt"}]: dispatch 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: audit 2026-03-09T20:22:53.362267+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 
192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: audit 2026-03-09T20:22:53.362267+0000 mon.b (mon.2) 9 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/crt"}]: dispatch 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: audit 2026-03-09T20:22:53.362888+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: audit 2026-03-09T20:22:53.362888+0000 mon.b (mon.2) 10 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/b/key"}]: dispatch 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: audit 2026-03-09T20:22:53.363353+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[22793]: audit 2026-03-09T20:22:53.363353+0000 mon.b (mon.2) 11 : audit [DBG] from='mgr.? 192.168.123.104:0/1435267859' entity='mgr.b' cmd=[{"prefix": "config-key get", "key": "mgr/dashboard/key"}]: dispatch 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[23235]: [09/Mar/2026:20:22:53] ENGINE Serving on http://:::9283 2026-03-09T20:22:53.868 INFO:journalctl@ceph.mgr.b.vm04.stdout:Mar 09 20:22:53 vm04 bash[23235]: [09/Mar/2026:20:22:53] ENGINE Bus STARTED 2026-03-09T20:22:53.907 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20968]: [09/Mar/2026:20:22:53] ENGINE Bus STARTING 2026-03-09T20:22:53.907 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20968]: CherryPy Checker: 2026-03-09T20:22:53.907 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20968]: The Application mounted at '' has an empty config. 
2026-03-09T20:22:53.907 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20968]: [09/Mar/2026:20:22:53] ENGINE Serving on http://:::9283 2026-03-09T20:22:53.907 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:53 vm03 bash[20968]: [09/Mar/2026:20:22:53] ENGINE Bus STARTED 2026-03-09T20:22:54.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.419081+0000 mon.a (mon.0) 488 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:22:54.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.419081+0000 mon.a (mon.0) 488 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:22:54.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.419278+0000 mon.a (mon.0) 489 : cluster [INF] Activating manager daemon a 2026-03-09T20:22:54.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.419278+0000 mon.a (mon.0) 489 : cluster [INF] Activating manager daemon a 2026-03-09T20:22:54.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.420476+0000 mon.a (mon.0) 490 : cluster [DBG] mgrmap e16: a(active, since 4m), standbys: b 2026-03-09T20:22:54.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.420476+0000 mon.a (mon.0) 490 : cluster [DBG] mgrmap e16: a(active, since 4m), standbys: b 2026-03-09T20:22:54.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.432036+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T20:22:54.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.432036+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T20:22:54.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.432218+0000 mon.a (mon.0) 492 : cluster [DBG] mgrmap e17: a(active, starting, since 0.0130171s), standbys: b 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.432218+0000 mon.a (mon.0) 492 : cluster [DBG] mgrmap e17: a(active, starting, since 0.0130171s), standbys: b 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.433818+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.433818+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.433920+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.433920+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:22:54.808 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.434002+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.434002+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.434599+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.434599+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.434700+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.434700+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.434825+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.434825+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.434944+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.434944+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.435054+0000 mon.b (mon.2) 19 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.435054+0000 mon.b (mon.2) 19 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.435971+0000 mon.b (mon.2) 20 : audit [DBG] 
from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.435971+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.436056+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.436056+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.436213+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.436213+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.443275+0000 mon.a (mon.0) 493 : cluster [INF] Manager daemon a is now available 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: cluster 2026-03-09T20:22:53.443275+0000 mon.a (mon.0) 493 : cluster [INF] Manager daemon a is now available 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.458502+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.458502+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.469075+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.469075+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.481224+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.481224+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 
2026-03-09T20:22:53.481624+0000 mon.b (mon.2) 25 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.481624+0000 mon.b (mon.2) 25 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.482181+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.482181+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.506380+0000 mon.b (mon.2) 26 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.506380+0000 mon.b (mon.2) 26 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.507008+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:54 vm08 bash[23232]: audit 2026-03-09T20:22:53.507008+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.419081+0000 mon.a (mon.0) 488 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.419081+0000 mon.a (mon.0) 488 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.419278+0000 mon.a (mon.0) 489 : cluster [INF] Activating manager daemon a 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.419278+0000 mon.a (mon.0) 489 : cluster [INF] Activating manager daemon a 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.420476+0000 mon.a (mon.0) 490 : cluster [DBG] mgrmap e16: a(active, since 4m), standbys: b 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 
2026-03-09T20:22:53.420476+0000 mon.a (mon.0) 490 : cluster [DBG] mgrmap e16: a(active, since 4m), standbys: b 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.432036+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.432036+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.432218+0000 mon.a (mon.0) 492 : cluster [DBG] mgrmap e17: a(active, starting, since 0.0130171s), standbys: b 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.432218+0000 mon.a (mon.0) 492 : cluster [DBG] mgrmap e17: a(active, starting, since 0.0130171s), standbys: b 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.433818+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.433818+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.433920+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.433920+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.434002+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.434002+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.434599+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.434599+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.434700+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 
2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.434700+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.434825+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.434825+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.434944+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.434944+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.435054+0000 mon.b (mon.2) 19 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.435054+0000 mon.b (mon.2) 19 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.435971+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.435971+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.436056+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.436056+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.436213+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.436213+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' 
cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.443275+0000 mon.a (mon.0) 493 : cluster [INF] Manager daemon a is now available 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: cluster 2026-03-09T20:22:53.443275+0000 mon.a (mon.0) 493 : cluster [INF] Manager daemon a is now available 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.458502+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.458502+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.469075+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.469075+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.481224+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:54.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.481224+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:54.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.481624+0000 mon.b (mon.2) 25 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.481624+0000 mon.b (mon.2) 25 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.482181+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.482181+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.506380+0000 mon.b (mon.2) 26 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config 
rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.506380+0000 mon.b (mon.2) 26 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.507008+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.869 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:54 vm04 bash[22793]: audit 2026-03-09T20:22:53.507008+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.419081+0000 mon.a (mon.0) 488 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.419081+0000 mon.a (mon.0) 488 : cluster [INF] Active manager daemon a restarted 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.419278+0000 mon.a (mon.0) 489 : cluster [INF] Activating manager daemon a 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.419278+0000 mon.a (mon.0) 489 : cluster [INF] Activating manager daemon a 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.420476+0000 mon.a (mon.0) 490 : cluster [DBG] mgrmap e16: a(active, since 4m), standbys: b 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.420476+0000 mon.a (mon.0) 490 : cluster [DBG] mgrmap e16: a(active, since 4m), standbys: b 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.432036+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.432036+0000 mon.a (mon.0) 491 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.432218+0000 mon.a (mon.0) 492 : cluster [DBG] mgrmap e17: a(active, starting, since 0.0130171s), standbys: b 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.432218+0000 mon.a (mon.0) 492 : cluster [DBG] mgrmap e17: a(active, starting, since 0.0130171s), standbys: b 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.433818+0000 mon.b (mon.2) 12 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.433818+0000 mon.b (mon.2) 12 : audit 
[DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "a"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.433920+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.433920+0000 mon.b (mon.2) 13 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "b"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.434002+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.434002+0000 mon.b (mon.2) 14 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata", "id": "c"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.434599+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.434599+0000 mon.b (mon.2) 15 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "a", "id": "a"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.434700+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.434700+0000 mon.b (mon.2) 16 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mgr metadata", "who": "b", "id": "b"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.434825+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.434825+0000 mon.b (mon.2) 17 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.434944+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.434944+0000 mon.b (mon.2) 18 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch 
2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.435054+0000 mon.b (mon.2) 19 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.435054+0000 mon.b (mon.2) 19 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.435971+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.435971+0000 mon.b (mon.2) 20 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mds metadata"}]: dispatch 2026-03-09T20:22:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.436056+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.436056+0000 mon.b (mon.2) 21 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd metadata"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.436213+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.436213+0000 mon.b (mon.2) 22 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "mon metadata"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.443275+0000 mon.a (mon.0) 493 : cluster [INF] Manager daemon a is now available 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: cluster 2026-03-09T20:22:53.443275+0000 mon.a (mon.0) 493 : cluster [INF] Manager daemon a is now available 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.458502+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.458502+0000 mon.a (mon.0) 494 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.469075+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.469075+0000 mon.b (mon.2) 23 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: 
dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.481224+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.481224+0000 mon.b (mon.2) 24 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.481624+0000 mon.b (mon.2) 25 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.481624+0000 mon.b (mon.2) 25 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.482181+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.482181+0000 mon.a (mon.0) 495 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/mirror_snapshot_schedule"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.506380+0000 mon.b (mon.2) 26 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.506380+0000 mon.b (mon.2) 26 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.507008+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:54.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:54 vm03 bash[20708]: audit 2026-03-09T20:22:53.507008+0000 mon.a (mon.0) 496 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/a/trash_purge_schedule"}]: dispatch 2026-03-09T20:22:55.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:55 vm08 bash[23232]: cluster 2026-03-09T20:22:54.433405+0000 mon.a (mon.0) 497 : cluster [DBG] mgrmap e18: a(active, since 1.0142s), standbys: b 2026-03-09T20:22:55.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:55 vm08 bash[23232]: cluster 2026-03-09T20:22:54.433405+0000 mon.a (mon.0) 497 : cluster [DBG] mgrmap e18: a(active, since 1.0142s), standbys: b 
2026-03-09T20:22:55.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:55 vm08 bash[23232]: cephadm 2026-03-09T20:22:54.512046+0000 mgr.a (mgr.14406) 2 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTING 2026-03-09T20:22:55.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:55 vm08 bash[23232]: cephadm 2026-03-09T20:22:54.512046+0000 mgr.a (mgr.14406) 2 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTING 2026-03-09T20:22:55.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:55 vm04 bash[22793]: cluster 2026-03-09T20:22:54.433405+0000 mon.a (mon.0) 497 : cluster [DBG] mgrmap e18: a(active, since 1.0142s), standbys: b 2026-03-09T20:22:55.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:55 vm04 bash[22793]: cluster 2026-03-09T20:22:54.433405+0000 mon.a (mon.0) 497 : cluster [DBG] mgrmap e18: a(active, since 1.0142s), standbys: b 2026-03-09T20:22:55.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:55 vm04 bash[22793]: cephadm 2026-03-09T20:22:54.512046+0000 mgr.a (mgr.14406) 2 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTING 2026-03-09T20:22:55.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:55 vm04 bash[22793]: cephadm 2026-03-09T20:22:54.512046+0000 mgr.a (mgr.14406) 2 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTING 2026-03-09T20:22:55.884 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:55 vm03 bash[20708]: cluster 2026-03-09T20:22:54.433405+0000 mon.a (mon.0) 497 : cluster [DBG] mgrmap e18: a(active, since 1.0142s), standbys: b 2026-03-09T20:22:55.884 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:55 vm03 bash[20708]: cluster 2026-03-09T20:22:54.433405+0000 mon.a (mon.0) 497 : cluster [DBG] mgrmap e18: a(active, since 1.0142s), standbys: b 2026-03-09T20:22:55.884 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:55 vm03 bash[20708]: cephadm 2026-03-09T20:22:54.512046+0000 mgr.a (mgr.14406) 2 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTING 2026-03-09T20:22:55.884 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:55 vm03 bash[20708]: cephadm 2026-03-09T20:22:54.512046+0000 mgr.a (mgr.14406) 2 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTING 2026-03-09T20:22:56.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:22:55 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:22:55] "GET /metrics HTTP/1.1" 200 20068 "" "Prometheus/2.51.0" 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cephadm 2026-03-09T20:22:54.613315+0000 mgr.a (mgr.14406) 3 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cephadm 2026-03-09T20:22:54.613315+0000 mgr.a (mgr.14406) 3 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cephadm 2026-03-09T20:22:54.721764+0000 mgr.a (mgr.14406) 4 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cephadm 2026-03-09T20:22:54.721764+0000 mgr.a (mgr.14406) 4 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cephadm 2026-03-09T20:22:54.721803+0000 mgr.a (mgr.14406) 5 : cephadm [INF] [09/Mar/2026:20:22:54] 
ENGINE Bus STARTED 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cephadm 2026-03-09T20:22:54.721803+0000 mgr.a (mgr.14406) 5 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTED 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cephadm 2026-03-09T20:22:54.722193+0000 mgr.a (mgr.14406) 6 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Client ('192.168.123.103', 46430) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cephadm 2026-03-09T20:22:54.722193+0000 mgr.a (mgr.14406) 6 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Client ('192.168.123.103', 46430) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cluster 2026-03-09T20:22:55.436313+0000 mgr.a (mgr.14406) 7 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cluster 2026-03-09T20:22:55.436313+0000 mgr.a (mgr.14406) 7 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cluster 2026-03-09T20:22:55.454032+0000 mon.a (mon.0) 498 : cluster [DBG] mgrmap e19: a(active, since 2s), standbys: b 2026-03-09T20:22:56.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:56 vm08 bash[23232]: cluster 2026-03-09T20:22:55.454032+0000 mon.a (mon.0) 498 : cluster [DBG] mgrmap e19: a(active, since 2s), standbys: b 2026-03-09T20:22:56.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cephadm 2026-03-09T20:22:54.613315+0000 mgr.a (mgr.14406) 3 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:22:56.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cephadm 2026-03-09T20:22:54.613315+0000 mgr.a (mgr.14406) 3 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:22:56.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cephadm 2026-03-09T20:22:54.721764+0000 mgr.a (mgr.14406) 4 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:22:56.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cephadm 2026-03-09T20:22:54.721764+0000 mgr.a (mgr.14406) 4 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:22:56.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cephadm 2026-03-09T20:22:54.721803+0000 mgr.a (mgr.14406) 5 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTED 2026-03-09T20:22:56.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cephadm 2026-03-09T20:22:54.721803+0000 mgr.a (mgr.14406) 5 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTED 2026-03-09T20:22:56.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cephadm 2026-03-09T20:22:54.722193+0000 mgr.a (mgr.14406) 6 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Client ('192.168.123.103', 46430) lost — 
peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:22:56.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cephadm 2026-03-09T20:22:54.722193+0000 mgr.a (mgr.14406) 6 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Client ('192.168.123.103', 46430) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:22:56.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cluster 2026-03-09T20:22:55.436313+0000 mgr.a (mgr.14406) 7 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:56.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cluster 2026-03-09T20:22:55.436313+0000 mgr.a (mgr.14406) 7 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:56.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cluster 2026-03-09T20:22:55.454032+0000 mon.a (mon.0) 498 : cluster [DBG] mgrmap e19: a(active, since 2s), standbys: b 2026-03-09T20:22:56.868 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:56 vm04 bash[22793]: cluster 2026-03-09T20:22:55.454032+0000 mon.a (mon.0) 498 : cluster [DBG] mgrmap e19: a(active, since 2s), standbys: b 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cephadm 2026-03-09T20:22:54.613315+0000 mgr.a (mgr.14406) 3 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cephadm 2026-03-09T20:22:54.613315+0000 mgr.a (mgr.14406) 3 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on http://192.168.123.103:8765 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cephadm 2026-03-09T20:22:54.721764+0000 mgr.a (mgr.14406) 4 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cephadm 2026-03-09T20:22:54.721764+0000 mgr.a (mgr.14406) 4 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Serving on https://192.168.123.103:7150 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cephadm 2026-03-09T20:22:54.721803+0000 mgr.a (mgr.14406) 5 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTED 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cephadm 2026-03-09T20:22:54.721803+0000 mgr.a (mgr.14406) 5 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Bus STARTED 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cephadm 2026-03-09T20:22:54.722193+0000 mgr.a (mgr.14406) 6 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Client ('192.168.123.103', 46430) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)') 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cephadm 2026-03-09T20:22:54.722193+0000 mgr.a (mgr.14406) 6 : cephadm [INF] [09/Mar/2026:20:22:54] ENGINE Client ('192.168.123.103', 46430) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) 
(_ssl.c:1147)') 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cluster 2026-03-09T20:22:55.436313+0000 mgr.a (mgr.14406) 7 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cluster 2026-03-09T20:22:55.436313+0000 mgr.a (mgr.14406) 7 : cluster [DBG] pgmap v4: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cluster 2026-03-09T20:22:55.454032+0000 mon.a (mon.0) 498 : cluster [DBG] mgrmap e19: a(active, since 2s), standbys: b 2026-03-09T20:22:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:56 vm03 bash[20708]: cluster 2026-03-09T20:22:55.454032+0000 mon.a (mon.0) 498 : cluster [DBG] mgrmap e19: a(active, since 2s), standbys: b 2026-03-09T20:22:58.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:58 vm08 bash[23232]: cluster 2026-03-09T20:22:57.436507+0000 mgr.a (mgr.14406) 8 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:58.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:58 vm08 bash[23232]: cluster 2026-03-09T20:22:57.436507+0000 mgr.a (mgr.14406) 8 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:58.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:58 vm08 bash[23232]: cluster 2026-03-09T20:22:57.461952+0000 mon.a (mon.0) 499 : cluster [DBG] mgrmap e20: a(active, since 4s), standbys: b 2026-03-09T20:22:58.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:58 vm08 bash[23232]: cluster 2026-03-09T20:22:57.461952+0000 mon.a (mon.0) 499 : cluster [DBG] mgrmap e20: a(active, since 4s), standbys: b 2026-03-09T20:22:58.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:58 vm04 bash[22793]: cluster 2026-03-09T20:22:57.436507+0000 mgr.a (mgr.14406) 8 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:58.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:58 vm04 bash[22793]: cluster 2026-03-09T20:22:57.436507+0000 mgr.a (mgr.14406) 8 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:58.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:58 vm04 bash[22793]: cluster 2026-03-09T20:22:57.461952+0000 mon.a (mon.0) 499 : cluster [DBG] mgrmap e20: a(active, since 4s), standbys: b 2026-03-09T20:22:58.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:58 vm04 bash[22793]: cluster 2026-03-09T20:22:57.461952+0000 mon.a (mon.0) 499 : cluster [DBG] mgrmap e20: a(active, since 4s), standbys: b 2026-03-09T20:22:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:58 vm03 bash[20708]: cluster 2026-03-09T20:22:57.436507+0000 mgr.a (mgr.14406) 8 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:58 vm03 bash[20708]: cluster 2026-03-09T20:22:57.436507+0000 mgr.a (mgr.14406) 8 : cluster [DBG] pgmap v5: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:22:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:58 vm03 bash[20708]: cluster 2026-03-09T20:22:57.461952+0000 mon.a (mon.0) 499 : cluster [DBG] mgrmap e20: a(active, since 
4s), standbys: b 2026-03-09T20:22:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:58 vm03 bash[20708]: cluster 2026-03-09T20:22:57.461952+0000 mon.a (mon.0) 499 : cluster [DBG] mgrmap e20: a(active, since 4s), standbys: b 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.748278+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.748278+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.755018+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.755018+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.787879+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.787879+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.793700+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.793700+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.961531+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.961531+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.966912+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:58.966912+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.344199+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.344199+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.348143+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.348143+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 
2026-03-09T20:22:59.352386+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.352386+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.353835+0000 mon.b (mon.2) 27 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.353835+0000 mon.b (mon.2) 27 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.354230+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.354230+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.356260+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.356260+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.357151+0000 mon.b (mon.2) 28 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.357151+0000 mon.b (mon.2) 28 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.357780+0000 mgr.a (mgr.14406) 9 : cephadm [INF] Adjusting osd_memory_target on vm08 to 2503M 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.357780+0000 mgr.a (mgr.14406) 9 : cephadm [INF] Adjusting osd_memory_target on vm08 to 2503M 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.357974+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.357974+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 
2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.360701+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.360701+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: cluster 2026-03-09T20:22:59.436679+0000 mgr.a (mgr.14406) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: cluster 2026-03-09T20:22:59.436679+0000 mgr.a (mgr.14406) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.527334+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.527334+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.532464+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.532464+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.533552+0000 mon.b (mon.2) 29 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.533552+0000 mon.b (mon.2) 29 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.533832+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.533832+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.534407+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.534407+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:00.059 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.534847+0000 mon.b (mon.2) 31 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.534847+0000 mon.b (mon.2) 31 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.535633+0000 mgr.a (mgr.14406) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.535633+0000 mgr.a (mgr.14406) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.535731+0000 mgr.a (mgr.14406) 12 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.535731+0000 mgr.a (mgr.14406) 12 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.535792+0000 mgr.a (mgr.14406) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.535792+0000 mgr.a (mgr.14406) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.716440+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.716440+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.720822+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.720822+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.725739+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.725739+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.731055+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.731055+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.735111+0000 mon.a 
(mon.0) 520 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.735111+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.738989+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.738989+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.742797+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:22:59 vm08 bash[23232]: audit 2026-03-09T20:22:59.742797+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.748278+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.748278+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.755018+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.755018+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.787879+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.787879+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.793700+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.793700+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.961531+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.961531+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.966912+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:58.966912+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 
bash[20708]: audit 2026-03-09T20:22:59.344199+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.344199+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.348143+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.348143+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.352386+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.352386+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.353835+0000 mon.b (mon.2) 27 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.353835+0000 mon.b (mon.2) 27 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.354230+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.354230+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.356260+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.356260+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.357151+0000 mon.b (mon.2) 28 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.357151+0000 mon.b (mon.2) 28 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.357780+0000 mgr.a (mgr.14406) 9 : cephadm [INF] Adjusting osd_memory_target on 
vm08 to 2503M 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.357780+0000 mgr.a (mgr.14406) 9 : cephadm [INF] Adjusting osd_memory_target on vm08 to 2503M 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.357974+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.357974+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.360701+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.360701+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: cluster 2026-03-09T20:22:59.436679+0000 mgr.a (mgr.14406) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:00.060 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: cluster 2026-03-09T20:22:59.436679+0000 mgr.a (mgr.14406) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.527334+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.527334+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.532464+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.532464+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.533552+0000 mon.b (mon.2) 29 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.533552+0000 mon.b (mon.2) 29 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.533832+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 
bash[20708]: audit 2026-03-09T20:22:59.533832+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.534407+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.534407+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.534847+0000 mon.b (mon.2) 31 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.534847+0000 mon.b (mon.2) 31 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.535633+0000 mgr.a (mgr.14406) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.535633+0000 mgr.a (mgr.14406) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.535731+0000 mgr.a (mgr.14406) 12 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.535731+0000 mgr.a (mgr.14406) 12 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.535792+0000 mgr.a (mgr.14406) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.535792+0000 mgr.a (mgr.14406) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.716440+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.716440+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.720822+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.720822+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.725739+0000 mon.a 
(mon.0) 518 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.725739+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.731055+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.731055+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.735111+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.735111+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.738989+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.738989+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.742797+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.061 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:22:59 vm03 bash[20708]: audit 2026-03-09T20:22:59.742797+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.748278+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.748278+0000 mon.a (mon.0) 500 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.755018+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.755018+0000 mon.a (mon.0) 501 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.787879+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.787879+0000 mon.a (mon.0) 502 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.793700+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.793700+0000 mon.a (mon.0) 503 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 
bash[22793]: audit 2026-03-09T20:22:58.961531+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.961531+0000 mon.a (mon.0) 504 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.966912+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:58.966912+0000 mon.a (mon.0) 505 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.344199+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.344199+0000 mon.a (mon.0) 506 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.348143+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.348143+0000 mon.a (mon.0) 507 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.352386+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.352386+0000 mon.a (mon.0) 508 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.353835+0000 mon.b (mon.2) 27 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.353835+0000 mon.b (mon.2) 27 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.354230+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.354230+0000 mon.a (mon.0) 509 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.356260+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.356260+0000 mon.a (mon.0) 510 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.357151+0000 mon.b (mon.2) 28 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.357151+0000 mon.b (mon.2) 28 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.357780+0000 mgr.a (mgr.14406) 9 : cephadm [INF] Adjusting osd_memory_target on vm08 to 2503M 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.357780+0000 mgr.a (mgr.14406) 9 : cephadm [INF] Adjusting osd_memory_target on vm08 to 2503M 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.357974+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.357974+0000 mon.a (mon.0) 511 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm04", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.360701+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.360701+0000 mon.a (mon.0) 512 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: cluster 2026-03-09T20:22:59.436679+0000 mgr.a (mgr.14406) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: cluster 2026-03-09T20:22:59.436679+0000 mgr.a (mgr.14406) 10 : cluster [DBG] pgmap v6: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.527334+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.527334+0000 mon.a (mon.0) 513 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.532464+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.532464+0000 mon.a (mon.0) 514 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.533552+0000 mon.b (mon.2) 29 : audit [INF] from='mgr.14406 
192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.533552+0000 mon.b (mon.2) 29 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.533832+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.533832+0000 mon.a (mon.0) 515 : audit [INF] from='mgr.14406 ' entity='mgr.a' cmd=[{"prefix": "config rm", "who": "osd/host:vm03", "name": "osd_memory_target"}]: dispatch 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.534407+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.534407+0000 mon.b (mon.2) 30 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.534847+0000 mon.b (mon.2) 31 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.534847+0000 mon.b (mon.2) 31 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.535633+0000 mgr.a (mgr.14406) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.535633+0000 mgr.a (mgr.14406) 11 : cephadm [INF] Updating vm03:/etc/ceph/ceph.conf 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.535731+0000 mgr.a (mgr.14406) 12 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.535731+0000 mgr.a (mgr.14406) 12 : cephadm [INF] Updating vm04:/etc/ceph/ceph.conf 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.535792+0000 mgr.a (mgr.14406) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.535792+0000 mgr.a (mgr.14406) 13 : cephadm [INF] Updating vm08:/etc/ceph/ceph.conf 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 
2026-03-09T20:22:59.716440+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.716440+0000 mon.a (mon.0) 516 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.720822+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.720822+0000 mon.a (mon.0) 517 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.725739+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.725739+0000 mon.a (mon.0) 518 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.731055+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.731055+0000 mon.a (mon.0) 519 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.735111+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.735111+0000 mon.a (mon.0) 520 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.738989+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.738989+0000 mon.a (mon.0) 521 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.742797+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.119 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:22:59 vm04 bash[22793]: audit 2026-03-09T20:22:59.742797+0000 mon.a (mon.0) 522 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.581061+0000 mgr.a (mgr.14406) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.581061+0000 mgr.a (mgr.14406) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.584549+0000 mgr.a (mgr.14406) 15 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 
bash[23232]: cephadm 2026-03-09T20:22:59.584549+0000 mgr.a (mgr.14406) 15 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.586531+0000 mgr.a (mgr.14406) 16 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.586531+0000 mgr.a (mgr.14406) 16 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.622844+0000 mgr.a (mgr.14406) 17 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.622844+0000 mgr.a (mgr.14406) 17 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.626069+0000 mgr.a (mgr.14406) 18 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.626069+0000 mgr.a (mgr.14406) 18 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.627747+0000 mgr.a (mgr.14406) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.627747+0000 mgr.a (mgr.14406) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.666033+0000 mgr.a (mgr.14406) 20 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.666033+0000 mgr.a (mgr.14406) 20 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.667213+0000 mgr.a (mgr.14406) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.667213+0000 mgr.a (mgr.14406) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.669421+0000 mgr.a (mgr.14406) 22 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.669421+0000 mgr.a (mgr.14406) 22 : 
cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: audit 2026-03-09T20:22:59.748436+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: audit 2026-03-09T20:22:59.748436+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.765982+0000 mgr.a (mgr.14406) 23 : cephadm [INF] Reconfiguring grafana.vm03 (dependencies changed)... 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.765982+0000 mgr.a (mgr.14406) 23 : cephadm [INF] Reconfiguring grafana.vm03 (dependencies changed)... 2026-03-09T20:23:00.875 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.801063+0000 mgr.a (mgr.14406) 24 : cephadm [INF] Reconfiguring daemon grafana.vm03 on vm03 2026-03-09T20:23:00.876 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:22:59.801063+0000 mgr.a (mgr.14406) 24 : cephadm [INF] Reconfiguring daemon grafana.vm03 on vm03 2026-03-09T20:23:00.876 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: audit 2026-03-09T20:23:00.370266+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.876 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: audit 2026-03-09T20:23:00.370266+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.876 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: audit 2026-03-09T20:23:00.374422+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.876 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: audit 2026-03-09T20:23:00.374422+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.876 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:23:00.375893+0000 mgr.a (mgr.14406) 25 : cephadm [INF] Reconfiguring alertmanager.vm08 (dependencies changed)... 2026-03-09T20:23:00.876 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:23:00.375893+0000 mgr.a (mgr.14406) 25 : cephadm [INF] Reconfiguring alertmanager.vm08 (dependencies changed)... 
2026-03-09T20:23:00.876 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:23:00.379965+0000 mgr.a (mgr.14406) 26 : cephadm [INF] Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-09T20:23:00.876 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:00 vm08 bash[23232]: cephadm 2026-03-09T20:23:00.379965+0000 mgr.a (mgr.14406) 26 : cephadm [INF] Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.581061+0000 mgr.a (mgr.14406) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.581061+0000 mgr.a (mgr.14406) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.584549+0000 mgr.a (mgr.14406) 15 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.584549+0000 mgr.a (mgr.14406) 15 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.586531+0000 mgr.a (mgr.14406) 16 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.586531+0000 mgr.a (mgr.14406) 16 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.622844+0000 mgr.a (mgr.14406) 17 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.622844+0000 mgr.a (mgr.14406) 17 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.626069+0000 mgr.a (mgr.14406) 18 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.626069+0000 mgr.a (mgr.14406) 18 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.627747+0000 mgr.a (mgr.14406) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.627747+0000 mgr.a (mgr.14406) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.666033+0000 mgr.a (mgr.14406) 20 : cephadm [INF] Updating 
vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.666033+0000 mgr.a (mgr.14406) 20 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.667213+0000 mgr.a (mgr.14406) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.667213+0000 mgr.a (mgr.14406) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.669421+0000 mgr.a (mgr.14406) 22 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.669421+0000 mgr.a (mgr.14406) 22 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: audit 2026-03-09T20:22:59.748436+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: audit 2026-03-09T20:22:59.748436+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.765982+0000 mgr.a (mgr.14406) 23 : cephadm [INF] Reconfiguring grafana.vm03 (dependencies changed)... 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.765982+0000 mgr.a (mgr.14406) 23 : cephadm [INF] Reconfiguring grafana.vm03 (dependencies changed)... 
2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.801063+0000 mgr.a (mgr.14406) 24 : cephadm [INF] Reconfiguring daemon grafana.vm03 on vm03 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:22:59.801063+0000 mgr.a (mgr.14406) 24 : cephadm [INF] Reconfiguring daemon grafana.vm03 on vm03 2026-03-09T20:23:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: audit 2026-03-09T20:23:00.370266+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: audit 2026-03-09T20:23:00.370266+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: audit 2026-03-09T20:23:00.374422+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: audit 2026-03-09T20:23:00.374422+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:00.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:23:00.375893+0000 mgr.a (mgr.14406) 25 : cephadm [INF] Reconfiguring alertmanager.vm08 (dependencies changed)... 2026-03-09T20:23:00.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:23:00.375893+0000 mgr.a (mgr.14406) 25 : cephadm [INF] Reconfiguring alertmanager.vm08 (dependencies changed)... 2026-03-09T20:23:00.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:23:00.379965+0000 mgr.a (mgr.14406) 26 : cephadm [INF] Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-09T20:23:00.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20708]: cephadm 2026-03-09T20:23:00.379965+0000 mgr.a (mgr.14406) 26 : cephadm [INF] Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-09T20:23:01.117 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.581061+0000 mgr.a (mgr.14406) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.581061+0000 mgr.a (mgr.14406) 14 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.584549+0000 mgr.a (mgr.14406) 15 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.584549+0000 mgr.a (mgr.14406) 15 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.586531+0000 mgr.a (mgr.14406) 16 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.586531+0000 
mgr.a (mgr.14406) 16 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.conf 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.622844+0000 mgr.a (mgr.14406) 17 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.622844+0000 mgr.a (mgr.14406) 17 : cephadm [INF] Updating vm03:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.626069+0000 mgr.a (mgr.14406) 18 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.626069+0000 mgr.a (mgr.14406) 18 : cephadm [INF] Updating vm04:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.627747+0000 mgr.a (mgr.14406) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.627747+0000 mgr.a (mgr.14406) 19 : cephadm [INF] Updating vm08:/etc/ceph/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.666033+0000 mgr.a (mgr.14406) 20 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.666033+0000 mgr.a (mgr.14406) 20 : cephadm [INF] Updating vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.667213+0000 mgr.a (mgr.14406) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.667213+0000 mgr.a (mgr.14406) 21 : cephadm [INF] Updating vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.669421+0000 mgr.a (mgr.14406) 22 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.669421+0000 mgr.a (mgr.14406) 22 : cephadm [INF] Updating vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/config/ceph.client.admin.keyring 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: audit 2026-03-09T20:22:59.748436+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: audit 2026-03-09T20:22:59.748436+0000 mon.a (mon.0) 523 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:01.118 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.765982+0000 mgr.a (mgr.14406) 23 : cephadm [INF] Reconfiguring grafana.vm03 (dependencies changed)... 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.765982+0000 mgr.a (mgr.14406) 23 : cephadm [INF] Reconfiguring grafana.vm03 (dependencies changed)... 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.801063+0000 mgr.a (mgr.14406) 24 : cephadm [INF] Reconfiguring daemon grafana.vm03 on vm03 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:22:59.801063+0000 mgr.a (mgr.14406) 24 : cephadm [INF] Reconfiguring daemon grafana.vm03 on vm03 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: audit 2026-03-09T20:23:00.370266+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: audit 2026-03-09T20:23:00.370266+0000 mon.a (mon.0) 524 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: audit 2026-03-09T20:23:00.374422+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: audit 2026-03-09T20:23:00.374422+0000 mon.a (mon.0) 525 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:23:00.375893+0000 mgr.a (mgr.14406) 25 : cephadm [INF] Reconfiguring alertmanager.vm08 (dependencies changed)... 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:23:00.375893+0000 mgr.a (mgr.14406) 25 : cephadm [INF] Reconfiguring alertmanager.vm08 (dependencies changed)... 
2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:23:00.379965+0000 mgr.a (mgr.14406) 26 : cephadm [INF] Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-09T20:23:01.118 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:00 vm04 bash[22793]: cephadm 2026-03-09T20:23:00.379965+0000 mgr.a (mgr.14406) 26 : cephadm [INF] Reconfiguring daemon alertmanager.vm08 on vm08 2026-03-09T20:23:01.406 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:00 vm03 bash[20968]: [09/Mar/2026:20:23:00] ENGINE Bus STOPPING 2026-03-09T20:23:01.406 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20968]: [09/Mar/2026:20:23:01] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T20:23:01.406 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20968]: [09/Mar/2026:20:23:01] ENGINE Bus STOPPED 2026-03-09T20:23:01.406 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20968]: [09/Mar/2026:20:23:01] ENGINE Bus STARTING 2026-03-09T20:23:01.657 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20968]: [09/Mar/2026:20:23:01] ENGINE Serving on http://:::9283 2026-03-09T20:23:01.657 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20968]: [09/Mar/2026:20:23:01] ENGINE Bus STARTED 2026-03-09T20:23:01.657 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20968]: [09/Mar/2026:20:23:01] ENGINE Bus STOPPING 2026-03-09T20:23:02.307 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.919924+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.307 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.919924+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.925021+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.925021+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.928713+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.928713+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.929148+0000 mgr.a (mgr.14406) 27 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.929148+0000 mgr.a (mgr.14406) 27 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.930026+0000 mon.b (mon.2) 33 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.930026+0000 mon.b (mon.2) 33 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.930326+0000 mgr.a (mgr.14406) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.930326+0000 mgr.a (mgr.14406) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.933484+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.933484+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.941190+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.941190+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.941504+0000 mgr.a (mgr.14406) 29 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.941504+0000 mgr.a (mgr.14406) 29 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.942113+0000 mon.b (mon.2) 35 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.942113+0000 mon.b (mon.2) 35 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.942391+0000 mgr.a (mgr.14406) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.942391+0000 mgr.a (mgr.14406) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.947739+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.947739+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.979193+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: audit 2026-03-09T20:23:00.979193+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: cluster 2026-03-09T20:23:01.436907+0000 mgr.a (mgr.14406) 31 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T20:23:02.308 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:01 vm08 bash[23232]: cluster 2026-03-09T20:23:01.436907+0000 mgr.a (mgr.14406) 31 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T20:23:02.367 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.919924+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.367 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.919924+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.367 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.925021+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.367 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 
vm04 bash[22793]: audit 2026-03-09T20:23:00.925021+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.367 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.928713+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.928713+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.929148+0000 mgr.a (mgr.14406) 27 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.929148+0000 mgr.a (mgr.14406) 27 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.930026+0000 mon.b (mon.2) 33 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.930026+0000 mon.b (mon.2) 33 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.930326+0000 mgr.a (mgr.14406) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.930326+0000 mgr.a (mgr.14406) 28 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.933484+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.933484+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.941190+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.941190+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.941504+0000 mgr.a (mgr.14406) 29 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.941504+0000 mgr.a (mgr.14406) 29 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.942113+0000 mon.b (mon.2) 35 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.942113+0000 mon.b (mon.2) 35 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.942391+0000 mgr.a (mgr.14406) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.942391+0000 mgr.a (mgr.14406) 30 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.947739+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.947739+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.979193+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: audit 2026-03-09T20:23:00.979193+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: cluster 2026-03-09T20:23:01.436907+0000 mgr.a (mgr.14406) 31 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T20:23:02.368 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:01 vm04 bash[22793]: cluster 2026-03-09T20:23:01.436907+0000 mgr.a (mgr.14406) 31 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.919924+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.919924+0000 mon.a (mon.0) 526 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.925021+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.925021+0000 mon.a (mon.0) 527 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.928713+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.928713+0000 mon.b (mon.2) 32 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.929148+0000 mgr.a (mgr.14406) 27 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.929148+0000 mgr.a (mgr.14406) 27 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.930026+0000 mon.b (mon.2) 33 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.930026+0000 mon.b (mon.2) 33 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.930326+0000 mgr.a (mgr.14406) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.930326+0000 mgr.a (mgr.14406) 28 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://vm03.local:3000"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.933484+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.933484+0000 mon.a (mon.0) 528 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.941190+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.941190+0000 mon.b (mon.2) 34 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.941504+0000 mgr.a (mgr.14406) 29 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.941504+0000 mgr.a (mgr.14406) 29 : audit [DBG] from='mon.? -' entity='mon.' 
cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.942113+0000 mon.b (mon.2) 35 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.942113+0000 mon.b (mon.2) 35 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.942391+0000 mgr.a (mgr.14406) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.942391+0000 mgr.a (mgr.14406) 30 : audit [DBG] from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://vm08.local:9093"}]: dispatch 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.947739+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.947739+0000 mon.a (mon.0) 529 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.979193+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: audit 2026-03-09T20:23:00.979193+0000 mon.b (mon.2) 36 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: cluster 2026-03-09T20:23:01.436907+0000 mgr.a (mgr.14406) 31 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:01 vm03 bash[20708]: cluster 2026-03-09T20:23:01.436907+0000 mgr.a (mgr.14406) 31 : cluster [DBG] pgmap v7: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 0 B/s wr, 10 op/s 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:02 vm03 bash[20968]: [09/Mar/2026:20:23:02] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:02 vm03 bash[20968]: [09/Mar/2026:20:23:02] ENGINE Bus STOPPED 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:02 vm03 bash[20968]: [09/Mar/2026:20:23:02] ENGINE Bus STARTING 2026-03-09T20:23:02.408 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:02 vm03 bash[20968]: [09/Mar/2026:20:23:02] ENGINE Serving on http://:::9283 2026-03-09T20:23:02.408 
INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:02 vm03 bash[20968]: [09/Mar/2026:20:23:02] ENGINE Bus STARTED 2026-03-09T20:23:04.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:04 vm08 bash[23232]: cluster 2026-03-09T20:23:03.437142+0000 mgr.a (mgr.14406) 32 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T20:23:04.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:04 vm08 bash[23232]: cluster 2026-03-09T20:23:03.437142+0000 mgr.a (mgr.14406) 32 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T20:23:04.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:04 vm04 bash[22793]: cluster 2026-03-09T20:23:03.437142+0000 mgr.a (mgr.14406) 32 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T20:23:04.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:04 vm04 bash[22793]: cluster 2026-03-09T20:23:03.437142+0000 mgr.a (mgr.14406) 32 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T20:23:04.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:04 vm03 bash[20708]: cluster 2026-03-09T20:23:03.437142+0000 mgr.a (mgr.14406) 32 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T20:23:04.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:04 vm03 bash[20708]: cluster 2026-03-09T20:23:03.437142+0000 mgr.a (mgr.14406) 32 : cluster [DBG] pgmap v8: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 8 op/s 2026-03-09T20:23:06.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:05 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:23:05] "GET /metrics HTTP/1.1" 200 20068 "" "Prometheus/2.51.0" 2026-03-09T20:23:06.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: cluster 2026-03-09T20:23:05.437391+0000 mgr.a (mgr.14406) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:23:06.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: cluster 2026-03-09T20:23:05.437391+0000 mgr.a (mgr.14406) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:23:06.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:05.995193+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:05.995193+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.001965+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.001965+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.092596+0000 
mon.a (mon.0) 532 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.092596+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.096745+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.096745+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.098358+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.098358+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.098957+0000 mon.b (mon.2) 38 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.098957+0000 mon.b (mon.2) 38 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.102231+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:06 vm08 bash[23232]: audit 2026-03-09T20:23:06.102231+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: cluster 2026-03-09T20:23:05.437391+0000 mgr.a (mgr.14406) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: cluster 2026-03-09T20:23:05.437391+0000 mgr.a (mgr.14406) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:05.995193+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:05.995193+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.001965+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.001965+0000 mon.a (mon.0) 531 : 
audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.092596+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.092596+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.096745+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.096745+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.098358+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.098358+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.098957+0000 mon.b (mon.2) 38 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.098957+0000 mon.b (mon.2) 38 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.102231+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:06 vm04 bash[22793]: audit 2026-03-09T20:23:06.102231+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: cluster 2026-03-09T20:23:05.437391+0000 mgr.a (mgr.14406) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: cluster 2026-03-09T20:23:05.437391+0000 mgr.a (mgr.14406) 33 : cluster [DBG] pgmap v9: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:05.995193+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:05.995193+0000 mon.a (mon.0) 530 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.001965+0000 mon.a (mon.0) 531 : audit [INF] 
from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.001965+0000 mon.a (mon.0) 531 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.092596+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.092596+0000 mon.a (mon.0) 532 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.096745+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.096745+0000 mon.a (mon.0) 533 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.098358+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.098358+0000 mon.b (mon.2) 37 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.098957+0000 mon.b (mon.2) 38 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.098957+0000 mon.b (mon.2) 38 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.102231+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:06 vm03 bash[20708]: audit 2026-03-09T20:23:06.102231+0000 mon.a (mon.0) 534 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:23:08.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:08 vm08 bash[23232]: cluster 2026-03-09T20:23:07.437609+0000 mgr.a (mgr.14406) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:08.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:08 vm08 bash[23232]: cluster 2026-03-09T20:23:07.437609+0000 mgr.a (mgr.14406) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:08.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:08 vm08 bash[23232]: audit 2026-03-09T20:23:08.481846+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:08.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:08 vm08 bash[23232]: 
audit 2026-03-09T20:23:08.481846+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:08.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:08 vm04 bash[22793]: cluster 2026-03-09T20:23:07.437609+0000 mgr.a (mgr.14406) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:08.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:08 vm04 bash[22793]: cluster 2026-03-09T20:23:07.437609+0000 mgr.a (mgr.14406) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:08.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:08 vm04 bash[22793]: audit 2026-03-09T20:23:08.481846+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:08.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:08 vm04 bash[22793]: audit 2026-03-09T20:23:08.481846+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:08 vm03 bash[20708]: cluster 2026-03-09T20:23:07.437609+0000 mgr.a (mgr.14406) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:08 vm03 bash[20708]: cluster 2026-03-09T20:23:07.437609+0000 mgr.a (mgr.14406) 34 : cluster [DBG] pgmap v10: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:08 vm03 bash[20708]: audit 2026-03-09T20:23:08.481846+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:08 vm03 bash[20708]: audit 2026-03-09T20:23:08.481846+0000 mon.b (mon.2) 39 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:10.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:10 vm08 bash[23232]: cluster 2026-03-09T20:23:09.437779+0000 mgr.a (mgr.14406) 35 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:10.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:10 vm08 bash[23232]: cluster 2026-03-09T20:23:09.437779+0000 mgr.a (mgr.14406) 35 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:10.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:10 vm04 bash[22793]: cluster 2026-03-09T20:23:09.437779+0000 mgr.a (mgr.14406) 35 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:10.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:10 vm04 bash[22793]: cluster 2026-03-09T20:23:09.437779+0000 mgr.a (mgr.14406) 35 : cluster [DBG] 
pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:10.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:10 vm03 bash[20708]: cluster 2026-03-09T20:23:09.437779+0000 mgr.a (mgr.14406) 35 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:10.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:10 vm03 bash[20708]: cluster 2026-03-09T20:23:09.437779+0000 mgr.a (mgr.14406) 35 : cluster [DBG] pgmap v11: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:12.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:12 vm08 bash[23232]: cluster 2026-03-09T20:23:11.437990+0000 mgr.a (mgr.14406) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:12.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:12 vm08 bash[23232]: cluster 2026-03-09T20:23:11.437990+0000 mgr.a (mgr.14406) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:12.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:12 vm04 bash[22793]: cluster 2026-03-09T20:23:11.437990+0000 mgr.a (mgr.14406) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:12.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:12 vm04 bash[22793]: cluster 2026-03-09T20:23:11.437990+0000 mgr.a (mgr.14406) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:12.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:12 vm03 bash[20708]: cluster 2026-03-09T20:23:11.437990+0000 mgr.a (mgr.14406) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:12.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:12 vm03 bash[20708]: cluster 2026-03-09T20:23:11.437990+0000 mgr.a (mgr.14406) 36 : cluster [DBG] pgmap v12: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 5 op/s 2026-03-09T20:23:14.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:14 vm08 bash[23232]: cluster 2026-03-09T20:23:13.438220+0000 mgr.a (mgr.14406) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:14.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:14 vm08 bash[23232]: cluster 2026-03-09T20:23:13.438220+0000 mgr.a (mgr.14406) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:14.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:14 vm04 bash[22793]: cluster 2026-03-09T20:23:13.438220+0000 mgr.a (mgr.14406) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:14.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:14 vm04 bash[22793]: cluster 2026-03-09T20:23:13.438220+0000 mgr.a (mgr.14406) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:14.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:14 vm03 bash[20708]: cluster 
2026-03-09T20:23:13.438220+0000 mgr.a (mgr.14406) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:14.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:14 vm03 bash[20708]: cluster 2026-03-09T20:23:13.438220+0000 mgr.a (mgr.14406) 37 : cluster [DBG] pgmap v13: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:16.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:15 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:23:15] "GET /metrics HTTP/1.1" 200 21332 "" "Prometheus/2.51.0" 2026-03-09T20:23:16.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:16 vm08 bash[23232]: cluster 2026-03-09T20:23:15.438474+0000 mgr.a (mgr.14406) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:16.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:16 vm08 bash[23232]: cluster 2026-03-09T20:23:15.438474+0000 mgr.a (mgr.14406) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:16.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:16 vm04 bash[22793]: cluster 2026-03-09T20:23:15.438474+0000 mgr.a (mgr.14406) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:16.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:16 vm04 bash[22793]: cluster 2026-03-09T20:23:15.438474+0000 mgr.a (mgr.14406) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:16.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:16 vm03 bash[20708]: cluster 2026-03-09T20:23:15.438474+0000 mgr.a (mgr.14406) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:16.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:16 vm03 bash[20708]: cluster 2026-03-09T20:23:15.438474+0000 mgr.a (mgr.14406) 38 : cluster [DBG] pgmap v14: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:18.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:18 vm08 bash[23232]: cluster 2026-03-09T20:23:17.438724+0000 mgr.a (mgr.14406) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:18.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:18 vm08 bash[23232]: cluster 2026-03-09T20:23:17.438724+0000 mgr.a (mgr.14406) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:18.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:18 vm04 bash[22793]: cluster 2026-03-09T20:23:17.438724+0000 mgr.a (mgr.14406) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:18.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:18 vm04 bash[22793]: cluster 2026-03-09T20:23:17.438724+0000 mgr.a (mgr.14406) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:18 vm03 bash[20708]: cluster 2026-03-09T20:23:17.438724+0000 mgr.a (mgr.14406) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:18 vm03 bash[20708]: cluster 
2026-03-09T20:23:17.438724+0000 mgr.a (mgr.14406) 39 : cluster [DBG] pgmap v15: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:20.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:20 vm08 bash[23232]: cluster 2026-03-09T20:23:19.438969+0000 mgr.a (mgr.14406) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:20.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:20 vm08 bash[23232]: cluster 2026-03-09T20:23:19.438969+0000 mgr.a (mgr.14406) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:20.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:20 vm04 bash[22793]: cluster 2026-03-09T20:23:19.438969+0000 mgr.a (mgr.14406) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:20.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:20 vm04 bash[22793]: cluster 2026-03-09T20:23:19.438969+0000 mgr.a (mgr.14406) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:20 vm03 bash[20708]: cluster 2026-03-09T20:23:19.438969+0000 mgr.a (mgr.14406) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:20 vm03 bash[20708]: cluster 2026-03-09T20:23:19.438969+0000 mgr.a (mgr.14406) 40 : cluster [DBG] pgmap v16: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:22.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:22 vm08 bash[23232]: cluster 2026-03-09T20:23:21.439286+0000 mgr.a (mgr.14406) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:22.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:22 vm08 bash[23232]: cluster 2026-03-09T20:23:21.439286+0000 mgr.a (mgr.14406) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:22.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:22 vm04 bash[22793]: cluster 2026-03-09T20:23:21.439286+0000 mgr.a (mgr.14406) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:22.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:22 vm04 bash[22793]: cluster 2026-03-09T20:23:21.439286+0000 mgr.a (mgr.14406) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:22.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:22 vm03 bash[20708]: cluster 2026-03-09T20:23:21.439286+0000 mgr.a (mgr.14406) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:22.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:22 vm03 bash[20708]: cluster 2026-03-09T20:23:21.439286+0000 mgr.a (mgr.14406) 41 : cluster [DBG] pgmap v17: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:23.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:23 vm08 bash[23232]: audit 2026-03-09T20:23:23.482369+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:23.807 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:23 vm08 bash[23232]: audit 2026-03-09T20:23:23.482369+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:23.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:23 vm04 bash[22793]: audit 2026-03-09T20:23:23.482369+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:23.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:23 vm04 bash[22793]: audit 2026-03-09T20:23:23.482369+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:23 vm03 bash[20708]: audit 2026-03-09T20:23:23.482369+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:23 vm03 bash[20708]: audit 2026-03-09T20:23:23.482369+0000 mon.b (mon.2) 40 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:24.807 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:24 vm08 bash[23232]: cluster 2026-03-09T20:23:23.439514+0000 mgr.a (mgr.14406) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:24.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:24 vm08 bash[23232]: cluster 2026-03-09T20:23:23.439514+0000 mgr.a (mgr.14406) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:24.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:24 vm04 bash[22793]: cluster 2026-03-09T20:23:23.439514+0000 mgr.a (mgr.14406) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:24.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:24 vm04 bash[22793]: cluster 2026-03-09T20:23:23.439514+0000 mgr.a (mgr.14406) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:24.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:24 vm03 bash[20708]: cluster 2026-03-09T20:23:23.439514+0000 mgr.a (mgr.14406) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:24.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:24 vm03 bash[20708]: cluster 2026-03-09T20:23:23.439514+0000 mgr.a (mgr.14406) 42 : cluster [DBG] pgmap v18: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:25.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:25 vm04 bash[22793]: cluster 2026-03-09T20:23:25.439716+0000 mgr.a (mgr.14406) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:25.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:25 vm04 bash[22793]: cluster 2026-03-09T20:23:25.439716+0000 mgr.a (mgr.14406) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:25.883 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:25 vm03 bash[20708]: cluster 2026-03-09T20:23:25.439716+0000 mgr.a (mgr.14406) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:25.883 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:25 vm03 bash[20708]: cluster 2026-03-09T20:23:25.439716+0000 mgr.a (mgr.14406) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:26.057 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:25 vm08 bash[23232]: cluster 2026-03-09T20:23:25.439716+0000 mgr.a (mgr.14406) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:26.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:25 vm08 bash[23232]: cluster 2026-03-09T20:23:25.439716+0000 mgr.a (mgr.14406) 43 : cluster [DBG] pgmap v19: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:26.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:25 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:23:25] "GET /metrics HTTP/1.1" 200 21329 "" "Prometheus/2.51.0" 2026-03-09T20:23:28.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:28 vm08 bash[23232]: cluster 2026-03-09T20:23:27.439893+0000 mgr.a (mgr.14406) 44 : cluster [DBG] pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:28.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:28 vm08 bash[23232]: cluster 2026-03-09T20:23:27.439893+0000 mgr.a (mgr.14406) 44 : cluster [DBG] pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:28.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:28 vm04 bash[22793]: cluster 2026-03-09T20:23:27.439893+0000 mgr.a (mgr.14406) 44 : cluster [DBG] pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:28.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:28 vm04 bash[22793]: cluster 2026-03-09T20:23:27.439893+0000 mgr.a (mgr.14406) 44 : cluster [DBG] pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:28 vm03 bash[20708]: cluster 2026-03-09T20:23:27.439893+0000 mgr.a (mgr.14406) 44 : cluster [DBG] pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:28 vm03 bash[20708]: cluster 2026-03-09T20:23:27.439893+0000 mgr.a (mgr.14406) 44 : cluster [DBG] pgmap v20: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:30.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:30 vm08 bash[23232]: cluster 2026-03-09T20:23:29.440065+0000 mgr.a (mgr.14406) 45 : cluster [DBG] pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:30.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:30 vm08 bash[23232]: cluster 2026-03-09T20:23:29.440065+0000 mgr.a (mgr.14406) 45 : cluster [DBG] pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:30.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:30 vm04 bash[22793]: cluster 2026-03-09T20:23:29.440065+0000 mgr.a (mgr.14406) 45 : cluster [DBG] pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:30.867 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:30 vm04 bash[22793]: cluster 2026-03-09T20:23:29.440065+0000 mgr.a (mgr.14406) 45 : cluster [DBG] pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:30.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:30 vm03 bash[20708]: cluster 2026-03-09T20:23:29.440065+0000 mgr.a (mgr.14406) 45 : cluster [DBG] pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:30.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:30 vm03 bash[20708]: cluster 2026-03-09T20:23:29.440065+0000 mgr.a (mgr.14406) 45 : cluster [DBG] pgmap v21: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:32.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:32 vm08 bash[23232]: cluster 2026-03-09T20:23:31.440238+0000 mgr.a (mgr.14406) 46 : cluster [DBG] pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:32.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:32 vm08 bash[23232]: cluster 2026-03-09T20:23:31.440238+0000 mgr.a (mgr.14406) 46 : cluster [DBG] pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:32.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:32 vm04 bash[22793]: cluster 2026-03-09T20:23:31.440238+0000 mgr.a (mgr.14406) 46 : cluster [DBG] pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:32.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:32 vm04 bash[22793]: cluster 2026-03-09T20:23:31.440238+0000 mgr.a (mgr.14406) 46 : cluster [DBG] pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:32 vm03 bash[20708]: cluster 2026-03-09T20:23:31.440238+0000 mgr.a (mgr.14406) 46 : cluster [DBG] pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:32 vm03 bash[20708]: cluster 2026-03-09T20:23:31.440238+0000 mgr.a (mgr.14406) 46 : cluster [DBG] pgmap v22: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:34.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:34 vm08 bash[23232]: cluster 2026-03-09T20:23:33.440437+0000 mgr.a (mgr.14406) 47 : cluster [DBG] pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:34.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:34 vm08 bash[23232]: cluster 2026-03-09T20:23:33.440437+0000 mgr.a (mgr.14406) 47 : cluster [DBG] pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:34.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:34 vm04 bash[22793]: cluster 2026-03-09T20:23:33.440437+0000 mgr.a (mgr.14406) 47 : cluster [DBG] pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:34.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:34 vm04 bash[22793]: cluster 2026-03-09T20:23:33.440437+0000 mgr.a (mgr.14406) 47 : cluster [DBG] pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:34.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:34 vm03 bash[20708]: cluster 2026-03-09T20:23:33.440437+0000 mgr.a (mgr.14406) 47 : cluster [DBG] pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 
GiB / 60 GiB avail 2026-03-09T20:23:34.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:34 vm03 bash[20708]: cluster 2026-03-09T20:23:33.440437+0000 mgr.a (mgr.14406) 47 : cluster [DBG] pgmap v23: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:36.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:35 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:23:35] "GET /metrics HTTP/1.1" 200 21329 "" "Prometheus/2.51.0" 2026-03-09T20:23:36.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:36 vm08 bash[23232]: cluster 2026-03-09T20:23:35.440659+0000 mgr.a (mgr.14406) 48 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:36.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:36 vm08 bash[23232]: cluster 2026-03-09T20:23:35.440659+0000 mgr.a (mgr.14406) 48 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:36.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:36 vm04 bash[22793]: cluster 2026-03-09T20:23:35.440659+0000 mgr.a (mgr.14406) 48 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:36.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:36 vm04 bash[22793]: cluster 2026-03-09T20:23:35.440659+0000 mgr.a (mgr.14406) 48 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:36.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:36 vm03 bash[20708]: cluster 2026-03-09T20:23:35.440659+0000 mgr.a (mgr.14406) 48 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:36.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:36 vm03 bash[20708]: cluster 2026-03-09T20:23:35.440659+0000 mgr.a (mgr.14406) 48 : cluster [DBG] pgmap v24: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:38.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:38 vm08 bash[23232]: cluster 2026-03-09T20:23:37.440885+0000 mgr.a (mgr.14406) 49 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:38.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:38 vm08 bash[23232]: cluster 2026-03-09T20:23:37.440885+0000 mgr.a (mgr.14406) 49 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:38.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:38 vm08 bash[23232]: audit 2026-03-09T20:23:38.482844+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:38.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:38 vm08 bash[23232]: audit 2026-03-09T20:23:38.482844+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:38.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:38 vm04 bash[22793]: cluster 2026-03-09T20:23:37.440885+0000 mgr.a (mgr.14406) 49 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:38.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:38 vm04 bash[22793]: cluster 2026-03-09T20:23:37.440885+0000 mgr.a (mgr.14406) 49 : cluster [DBG] 
pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:38.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:38 vm04 bash[22793]: audit 2026-03-09T20:23:38.482844+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:38.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:38 vm04 bash[22793]: audit 2026-03-09T20:23:38.482844+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:38 vm03 bash[20708]: cluster 2026-03-09T20:23:37.440885+0000 mgr.a (mgr.14406) 49 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:38 vm03 bash[20708]: cluster 2026-03-09T20:23:37.440885+0000 mgr.a (mgr.14406) 49 : cluster [DBG] pgmap v25: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:38 vm03 bash[20708]: audit 2026-03-09T20:23:38.482844+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:38 vm03 bash[20708]: audit 2026-03-09T20:23:38.482844+0000 mon.b (mon.2) 41 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:40.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:40 vm08 bash[23232]: cluster 2026-03-09T20:23:39.441093+0000 mgr.a (mgr.14406) 50 : cluster [DBG] pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:40.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:40 vm08 bash[23232]: cluster 2026-03-09T20:23:39.441093+0000 mgr.a (mgr.14406) 50 : cluster [DBG] pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:40.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:40 vm04 bash[22793]: cluster 2026-03-09T20:23:39.441093+0000 mgr.a (mgr.14406) 50 : cluster [DBG] pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:40.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:40 vm04 bash[22793]: cluster 2026-03-09T20:23:39.441093+0000 mgr.a (mgr.14406) 50 : cluster [DBG] pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:40.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:40 vm03 bash[20708]: cluster 2026-03-09T20:23:39.441093+0000 mgr.a (mgr.14406) 50 : cluster [DBG] pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:40.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:40 vm03 bash[20708]: cluster 2026-03-09T20:23:39.441093+0000 mgr.a (mgr.14406) 50 : cluster [DBG] pgmap v26: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:42.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:42 vm08 bash[23232]: cluster 2026-03-09T20:23:41.441329+0000 mgr.a (mgr.14406) 51 : cluster [DBG] pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 
GiB avail 2026-03-09T20:23:42.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:42 vm08 bash[23232]: cluster 2026-03-09T20:23:41.441329+0000 mgr.a (mgr.14406) 51 : cluster [DBG] pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:42.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:42 vm04 bash[22793]: cluster 2026-03-09T20:23:41.441329+0000 mgr.a (mgr.14406) 51 : cluster [DBG] pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:42.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:42 vm04 bash[22793]: cluster 2026-03-09T20:23:41.441329+0000 mgr.a (mgr.14406) 51 : cluster [DBG] pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:42.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:42 vm03 bash[20708]: cluster 2026-03-09T20:23:41.441329+0000 mgr.a (mgr.14406) 51 : cluster [DBG] pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:42.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:42 vm03 bash[20708]: cluster 2026-03-09T20:23:41.441329+0000 mgr.a (mgr.14406) 51 : cluster [DBG] pgmap v27: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:43.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:43 vm04 bash[22793]: cluster 2026-03-09T20:23:43.441556+0000 mgr.a (mgr.14406) 52 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:43.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:43 vm04 bash[22793]: cluster 2026-03-09T20:23:43.441556+0000 mgr.a (mgr.14406) 52 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:43.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:43 vm03 bash[20708]: cluster 2026-03-09T20:23:43.441556+0000 mgr.a (mgr.14406) 52 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:43.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:43 vm03 bash[20708]: cluster 2026-03-09T20:23:43.441556+0000 mgr.a (mgr.14406) 52 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:44.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:43 vm08 bash[23232]: cluster 2026-03-09T20:23:43.441556+0000 mgr.a (mgr.14406) 52 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:44.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:43 vm08 bash[23232]: cluster 2026-03-09T20:23:43.441556+0000 mgr.a (mgr.14406) 52 : cluster [DBG] pgmap v28: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:46.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:45 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:23:45] "GET /metrics HTTP/1.1" 200 21330 "" "Prometheus/2.51.0" 2026-03-09T20:23:46.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:46 vm08 bash[23232]: cluster 2026-03-09T20:23:45.441804+0000 mgr.a (mgr.14406) 53 : cluster [DBG] pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:46.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:46 vm08 bash[23232]: cluster 2026-03-09T20:23:45.441804+0000 mgr.a (mgr.14406) 53 : cluster [DBG] pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 
2026-03-09T20:23:46.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:46 vm04 bash[22793]: cluster 2026-03-09T20:23:45.441804+0000 mgr.a (mgr.14406) 53 : cluster [DBG] pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:46.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:46 vm04 bash[22793]: cluster 2026-03-09T20:23:45.441804+0000 mgr.a (mgr.14406) 53 : cluster [DBG] pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:46.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:46 vm03 bash[20708]: cluster 2026-03-09T20:23:45.441804+0000 mgr.a (mgr.14406) 53 : cluster [DBG] pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:46.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:46 vm03 bash[20708]: cluster 2026-03-09T20:23:45.441804+0000 mgr.a (mgr.14406) 53 : cluster [DBG] pgmap v29: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:48.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:48 vm08 bash[23232]: cluster 2026-03-09T20:23:47.441984+0000 mgr.a (mgr.14406) 54 : cluster [DBG] pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:48.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:48 vm08 bash[23232]: cluster 2026-03-09T20:23:47.441984+0000 mgr.a (mgr.14406) 54 : cluster [DBG] pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:48.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:48 vm04 bash[22793]: cluster 2026-03-09T20:23:47.441984+0000 mgr.a (mgr.14406) 54 : cluster [DBG] pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:48.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:48 vm04 bash[22793]: cluster 2026-03-09T20:23:47.441984+0000 mgr.a (mgr.14406) 54 : cluster [DBG] pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:48.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:48 vm03 bash[20708]: cluster 2026-03-09T20:23:47.441984+0000 mgr.a (mgr.14406) 54 : cluster [DBG] pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:48.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:48 vm03 bash[20708]: cluster 2026-03-09T20:23:47.441984+0000 mgr.a (mgr.14406) 54 : cluster [DBG] pgmap v30: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:50.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:50 vm08 bash[23232]: cluster 2026-03-09T20:23:49.442162+0000 mgr.a (mgr.14406) 55 : cluster [DBG] pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:50.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:50 vm08 bash[23232]: cluster 2026-03-09T20:23:49.442162+0000 mgr.a (mgr.14406) 55 : cluster [DBG] pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:50.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:50 vm04 bash[22793]: cluster 2026-03-09T20:23:49.442162+0000 mgr.a (mgr.14406) 55 : cluster [DBG] pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:50.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:50 vm04 bash[22793]: cluster 2026-03-09T20:23:49.442162+0000 mgr.a (mgr.14406) 55 : cluster [DBG] pgmap v31: 1 pgs: 1 active+clean; 449 
KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:50.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:50 vm03 bash[20708]: cluster 2026-03-09T20:23:49.442162+0000 mgr.a (mgr.14406) 55 : cluster [DBG] pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:50.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:50 vm03 bash[20708]: cluster 2026-03-09T20:23:49.442162+0000 mgr.a (mgr.14406) 55 : cluster [DBG] pgmap v31: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:52.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:52 vm08 bash[23232]: cluster 2026-03-09T20:23:51.442387+0000 mgr.a (mgr.14406) 56 : cluster [DBG] pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:52.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:52 vm08 bash[23232]: cluster 2026-03-09T20:23:51.442387+0000 mgr.a (mgr.14406) 56 : cluster [DBG] pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:52.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:52 vm04 bash[22793]: cluster 2026-03-09T20:23:51.442387+0000 mgr.a (mgr.14406) 56 : cluster [DBG] pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:52.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:52 vm04 bash[22793]: cluster 2026-03-09T20:23:51.442387+0000 mgr.a (mgr.14406) 56 : cluster [DBG] pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:52.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:52 vm03 bash[20708]: cluster 2026-03-09T20:23:51.442387+0000 mgr.a (mgr.14406) 56 : cluster [DBG] pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:52.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:52 vm03 bash[20708]: cluster 2026-03-09T20:23:51.442387+0000 mgr.a (mgr.14406) 56 : cluster [DBG] pgmap v32: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:53.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:53 vm08 bash[23232]: audit 2026-03-09T20:23:53.482988+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:53.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:53 vm08 bash[23232]: audit 2026-03-09T20:23:53.482988+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:53.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:53 vm04 bash[22793]: audit 2026-03-09T20:23:53.482988+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:53.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:53 vm04 bash[22793]: audit 2026-03-09T20:23:53.482988+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:53.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:53 vm03 bash[20708]: audit 2026-03-09T20:23:53.482988+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: 
dispatch 2026-03-09T20:23:53.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:53 vm03 bash[20708]: audit 2026-03-09T20:23:53.482988+0000 mon.b (mon.2) 42 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:23:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:54 vm08 bash[23232]: cluster 2026-03-09T20:23:53.442577+0000 mgr.a (mgr.14406) 57 : cluster [DBG] pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:54 vm08 bash[23232]: cluster 2026-03-09T20:23:53.442577+0000 mgr.a (mgr.14406) 57 : cluster [DBG] pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:54 vm04 bash[22793]: cluster 2026-03-09T20:23:53.442577+0000 mgr.a (mgr.14406) 57 : cluster [DBG] pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:54 vm04 bash[22793]: cluster 2026-03-09T20:23:53.442577+0000 mgr.a (mgr.14406) 57 : cluster [DBG] pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:54 vm03 bash[20708]: cluster 2026-03-09T20:23:53.442577+0000 mgr.a (mgr.14406) 57 : cluster [DBG] pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:54 vm03 bash[20708]: cluster 2026-03-09T20:23:53.442577+0000 mgr.a (mgr.14406) 57 : cluster [DBG] pgmap v33: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:56.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:23:55 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:23:55] "GET /metrics HTTP/1.1" 200 21326 "" "Prometheus/2.51.0" 2026-03-09T20:23:56.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:56 vm08 bash[23232]: cluster 2026-03-09T20:23:55.442788+0000 mgr.a (mgr.14406) 58 : cluster [DBG] pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:56.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:56 vm08 bash[23232]: cluster 2026-03-09T20:23:55.442788+0000 mgr.a (mgr.14406) 58 : cluster [DBG] pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:56.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:56 vm04 bash[22793]: cluster 2026-03-09T20:23:55.442788+0000 mgr.a (mgr.14406) 58 : cluster [DBG] pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:56.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:56 vm04 bash[22793]: cluster 2026-03-09T20:23:55.442788+0000 mgr.a (mgr.14406) 58 : cluster [DBG] pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:56 vm03 bash[20708]: cluster 2026-03-09T20:23:55.442788+0000 mgr.a (mgr.14406) 58 : cluster [DBG] pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:56 vm03 bash[20708]: cluster 2026-03-09T20:23:55.442788+0000 mgr.a (mgr.14406) 58 : cluster [DBG] pgmap v34: 1 pgs: 1 active+clean; 449 KiB data, 
80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:58.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:58 vm08 bash[23232]: cluster 2026-03-09T20:23:57.442960+0000 mgr.a (mgr.14406) 59 : cluster [DBG] pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:58.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:23:58 vm08 bash[23232]: cluster 2026-03-09T20:23:57.442960+0000 mgr.a (mgr.14406) 59 : cluster [DBG] pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:58.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:58 vm04 bash[22793]: cluster 2026-03-09T20:23:57.442960+0000 mgr.a (mgr.14406) 59 : cluster [DBG] pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:58.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:23:58 vm04 bash[22793]: cluster 2026-03-09T20:23:57.442960+0000 mgr.a (mgr.14406) 59 : cluster [DBG] pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:58 vm03 bash[20708]: cluster 2026-03-09T20:23:57.442960+0000 mgr.a (mgr.14406) 59 : cluster [DBG] pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:23:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:23:58 vm03 bash[20708]: cluster 2026-03-09T20:23:57.442960+0000 mgr.a (mgr.14406) 59 : cluster [DBG] pgmap v35: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:00.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:00 vm08 bash[23232]: cluster 2026-03-09T20:23:59.443163+0000 mgr.a (mgr.14406) 60 : cluster [DBG] pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:00.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:00 vm08 bash[23232]: cluster 2026-03-09T20:23:59.443163+0000 mgr.a (mgr.14406) 60 : cluster [DBG] pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:00.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:00 vm04 bash[22793]: cluster 2026-03-09T20:23:59.443163+0000 mgr.a (mgr.14406) 60 : cluster [DBG] pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:00.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:00 vm04 bash[22793]: cluster 2026-03-09T20:23:59.443163+0000 mgr.a (mgr.14406) 60 : cluster [DBG] pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:00 vm03 bash[20708]: cluster 2026-03-09T20:23:59.443163+0000 mgr.a (mgr.14406) 60 : cluster [DBG] pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:00 vm03 bash[20708]: cluster 2026-03-09T20:23:59.443163+0000 mgr.a (mgr.14406) 60 : cluster [DBG] pgmap v36: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:02.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:02 vm08 bash[23232]: cluster 2026-03-09T20:24:01.443346+0000 mgr.a (mgr.14406) 61 : cluster [DBG] pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:02.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:02 vm08 bash[23232]: cluster 2026-03-09T20:24:01.443346+0000 mgr.a (mgr.14406) 61 : cluster [DBG] 
pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:02.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:02 vm04 bash[22793]: cluster 2026-03-09T20:24:01.443346+0000 mgr.a (mgr.14406) 61 : cluster [DBG] pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:02.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:02 vm04 bash[22793]: cluster 2026-03-09T20:24:01.443346+0000 mgr.a (mgr.14406) 61 : cluster [DBG] pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:02.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:02 vm03 bash[20708]: cluster 2026-03-09T20:24:01.443346+0000 mgr.a (mgr.14406) 61 : cluster [DBG] pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:02.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:02 vm03 bash[20708]: cluster 2026-03-09T20:24:01.443346+0000 mgr.a (mgr.14406) 61 : cluster [DBG] pgmap v37: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:03.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:03 vm04 bash[22793]: cluster 2026-03-09T20:24:03.443534+0000 mgr.a (mgr.14406) 62 : cluster [DBG] pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:03.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:03 vm04 bash[22793]: cluster 2026-03-09T20:24:03.443534+0000 mgr.a (mgr.14406) 62 : cluster [DBG] pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:03.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:03 vm03 bash[20708]: cluster 2026-03-09T20:24:03.443534+0000 mgr.a (mgr.14406) 62 : cluster [DBG] pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:03.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:03 vm03 bash[20708]: cluster 2026-03-09T20:24:03.443534+0000 mgr.a (mgr.14406) 62 : cluster [DBG] pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:04.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:03 vm08 bash[23232]: cluster 2026-03-09T20:24:03.443534+0000 mgr.a (mgr.14406) 62 : cluster [DBG] pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:04.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:03 vm08 bash[23232]: cluster 2026-03-09T20:24:03.443534+0000 mgr.a (mgr.14406) 62 : cluster [DBG] pgmap v38: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:06.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:24:05 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:24:05] "GET /metrics HTTP/1.1" 200 21326 "" "Prometheus/2.51.0" 2026-03-09T20:24:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:06 vm08 bash[23232]: cluster 2026-03-09T20:24:05.443726+0000 mgr.a (mgr.14406) 63 : cluster [DBG] pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:06 vm08 bash[23232]: cluster 2026-03-09T20:24:05.443726+0000 mgr.a (mgr.14406) 63 : cluster [DBG] pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:06 vm08 bash[23232]: audit 2026-03-09T20:24:06.147345+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.14406 
192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:24:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:06 vm08 bash[23232]: audit 2026-03-09T20:24:06.147345+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:24:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:06 vm08 bash[23232]: audit 2026-03-09T20:24:06.461653+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:24:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:06 vm08 bash[23232]: audit 2026-03-09T20:24:06.461653+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:24:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:06 vm08 bash[23232]: audit 2026-03-09T20:24:06.462500+0000 mon.b (mon.2) 45 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:24:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:06 vm08 bash[23232]: audit 2026-03-09T20:24:06.462500+0000 mon.b (mon.2) 45 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:24:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:06 vm08 bash[23232]: audit 2026-03-09T20:24:06.467125+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:24:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:06 vm08 bash[23232]: audit 2026-03-09T20:24:06.467125+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:24:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:06 vm04 bash[22793]: cluster 2026-03-09T20:24:05.443726+0000 mgr.a (mgr.14406) 63 : cluster [DBG] pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:06 vm04 bash[22793]: cluster 2026-03-09T20:24:05.443726+0000 mgr.a (mgr.14406) 63 : cluster [DBG] pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:06 vm04 bash[22793]: audit 2026-03-09T20:24:06.147345+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:24:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:06 vm04 bash[22793]: audit 2026-03-09T20:24:06.147345+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:24:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:06 vm04 bash[22793]: audit 2026-03-09T20:24:06.461653+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:24:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:06 vm04 bash[22793]: audit 2026-03-09T20:24:06.461653+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' 
cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:24:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:06 vm04 bash[22793]: audit 2026-03-09T20:24:06.462500+0000 mon.b (mon.2) 45 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:24:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:06 vm04 bash[22793]: audit 2026-03-09T20:24:06.462500+0000 mon.b (mon.2) 45 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:24:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:06 vm04 bash[22793]: audit 2026-03-09T20:24:06.467125+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:24:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:06 vm04 bash[22793]: audit 2026-03-09T20:24:06.467125+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:24:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:06 vm03 bash[20708]: cluster 2026-03-09T20:24:05.443726+0000 mgr.a (mgr.14406) 63 : cluster [DBG] pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:06 vm03 bash[20708]: cluster 2026-03-09T20:24:05.443726+0000 mgr.a (mgr.14406) 63 : cluster [DBG] pgmap v39: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:06 vm03 bash[20708]: audit 2026-03-09T20:24:06.147345+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:24:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:06 vm03 bash[20708]: audit 2026-03-09T20:24:06.147345+0000 mon.b (mon.2) 43 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:24:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:06 vm03 bash[20708]: audit 2026-03-09T20:24:06.461653+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:24:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:06 vm03 bash[20708]: audit 2026-03-09T20:24:06.461653+0000 mon.b (mon.2) 44 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:24:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:06 vm03 bash[20708]: audit 2026-03-09T20:24:06.462500+0000 mon.b (mon.2) 45 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:24:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:06 vm03 bash[20708]: audit 2026-03-09T20:24:06.462500+0000 mon.b (mon.2) 45 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:24:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:06 vm03 bash[20708]: audit 2026-03-09T20:24:06.467125+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:24:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 
09 20:24:06 vm03 bash[20708]: audit 2026-03-09T20:24:06.467125+0000 mon.a (mon.0) 535 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:24:08.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:08 vm08 bash[23232]: cluster 2026-03-09T20:24:07.443957+0000 mgr.a (mgr.14406) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:08.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:08 vm08 bash[23232]: cluster 2026-03-09T20:24:07.443957+0000 mgr.a (mgr.14406) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:08.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:08 vm08 bash[23232]: audit 2026-03-09T20:24:08.483374+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:08.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:08 vm08 bash[23232]: audit 2026-03-09T20:24:08.483374+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:08 vm04 bash[22793]: cluster 2026-03-09T20:24:07.443957+0000 mgr.a (mgr.14406) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:08 vm04 bash[22793]: cluster 2026-03-09T20:24:07.443957+0000 mgr.a (mgr.14406) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:08 vm04 bash[22793]: audit 2026-03-09T20:24:08.483374+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:08 vm04 bash[22793]: audit 2026-03-09T20:24:08.483374+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:08 vm03 bash[20708]: cluster 2026-03-09T20:24:07.443957+0000 mgr.a (mgr.14406) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:08 vm03 bash[20708]: cluster 2026-03-09T20:24:07.443957+0000 mgr.a (mgr.14406) 64 : cluster [DBG] pgmap v40: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:08 vm03 bash[20708]: audit 2026-03-09T20:24:08.483374+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:08 vm03 bash[20708]: audit 2026-03-09T20:24:08.483374+0000 mon.b (mon.2) 46 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:10.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:10 vm08 bash[23232]: cluster 
2026-03-09T20:24:09.444157+0000 mgr.a (mgr.14406) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:10.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:10 vm08 bash[23232]: cluster 2026-03-09T20:24:09.444157+0000 mgr.a (mgr.14406) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:10.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:10 vm04 bash[22793]: cluster 2026-03-09T20:24:09.444157+0000 mgr.a (mgr.14406) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:10.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:10 vm04 bash[22793]: cluster 2026-03-09T20:24:09.444157+0000 mgr.a (mgr.14406) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:10.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:10 vm03 bash[20708]: cluster 2026-03-09T20:24:09.444157+0000 mgr.a (mgr.14406) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:10.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:10 vm03 bash[20708]: cluster 2026-03-09T20:24:09.444157+0000 mgr.a (mgr.14406) 65 : cluster [DBG] pgmap v41: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:12.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:12 vm08 bash[23232]: cluster 2026-03-09T20:24:11.444337+0000 mgr.a (mgr.14406) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:12.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:12 vm08 bash[23232]: cluster 2026-03-09T20:24:11.444337+0000 mgr.a (mgr.14406) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:12.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:12 vm04 bash[22793]: cluster 2026-03-09T20:24:11.444337+0000 mgr.a (mgr.14406) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:12.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:12 vm04 bash[22793]: cluster 2026-03-09T20:24:11.444337+0000 mgr.a (mgr.14406) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:12.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:12 vm03 bash[20708]: cluster 2026-03-09T20:24:11.444337+0000 mgr.a (mgr.14406) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:12.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:12 vm03 bash[20708]: cluster 2026-03-09T20:24:11.444337+0000 mgr.a (mgr.14406) 66 : cluster [DBG] pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:14.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:14 vm08 bash[23232]: cluster 2026-03-09T20:24:13.444584+0000 mgr.a (mgr.14406) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:14.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:14 vm08 bash[23232]: cluster 2026-03-09T20:24:13.444584+0000 mgr.a (mgr.14406) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:14.866 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:14 vm04 bash[22793]: cluster 2026-03-09T20:24:13.444584+0000 mgr.a (mgr.14406) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:14.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:14 vm04 bash[22793]: cluster 2026-03-09T20:24:13.444584+0000 mgr.a (mgr.14406) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:14.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:14 vm03 bash[20708]: cluster 2026-03-09T20:24:13.444584+0000 mgr.a (mgr.14406) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:14.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:14 vm03 bash[20708]: cluster 2026-03-09T20:24:13.444584+0000 mgr.a (mgr.14406) 67 : cluster [DBG] pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:16.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:24:15 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:24:15] "GET /metrics HTTP/1.1" 200 21321 "" "Prometheus/2.51.0" 2026-03-09T20:24:16.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:16 vm08 bash[23232]: cluster 2026-03-09T20:24:15.444862+0000 mgr.a (mgr.14406) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:16.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:16 vm08 bash[23232]: cluster 2026-03-09T20:24:15.444862+0000 mgr.a (mgr.14406) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:16.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:16 vm04 bash[22793]: cluster 2026-03-09T20:24:15.444862+0000 mgr.a (mgr.14406) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:16.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:16 vm04 bash[22793]: cluster 2026-03-09T20:24:15.444862+0000 mgr.a (mgr.14406) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:16.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:16 vm03 bash[20708]: cluster 2026-03-09T20:24:15.444862+0000 mgr.a (mgr.14406) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:16.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:16 vm03 bash[20708]: cluster 2026-03-09T20:24:15.444862+0000 mgr.a (mgr.14406) 68 : cluster [DBG] pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:18.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:18 vm08 bash[23232]: cluster 2026-03-09T20:24:17.445068+0000 mgr.a (mgr.14406) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:18.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:18 vm08 bash[23232]: cluster 2026-03-09T20:24:17.445068+0000 mgr.a (mgr.14406) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:18.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:18 vm04 bash[22793]: cluster 2026-03-09T20:24:17.445068+0000 mgr.a (mgr.14406) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:18.866 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:18 vm04 bash[22793]: cluster 2026-03-09T20:24:17.445068+0000 mgr.a (mgr.14406) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:18 vm03 bash[20708]: cluster 2026-03-09T20:24:17.445068+0000 mgr.a (mgr.14406) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:18 vm03 bash[20708]: cluster 2026-03-09T20:24:17.445068+0000 mgr.a (mgr.14406) 69 : cluster [DBG] pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:20.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:20 vm08 bash[23232]: cluster 2026-03-09T20:24:19.445239+0000 mgr.a (mgr.14406) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:20.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:20 vm08 bash[23232]: cluster 2026-03-09T20:24:19.445239+0000 mgr.a (mgr.14406) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:20.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:20 vm04 bash[22793]: cluster 2026-03-09T20:24:19.445239+0000 mgr.a (mgr.14406) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:20.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:20 vm04 bash[22793]: cluster 2026-03-09T20:24:19.445239+0000 mgr.a (mgr.14406) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:20 vm03 bash[20708]: cluster 2026-03-09T20:24:19.445239+0000 mgr.a (mgr.14406) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:20 vm03 bash[20708]: cluster 2026-03-09T20:24:19.445239+0000 mgr.a (mgr.14406) 70 : cluster [DBG] pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:22.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:22 vm08 bash[23232]: cluster 2026-03-09T20:24:21.445457+0000 mgr.a (mgr.14406) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:22.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:22 vm08 bash[23232]: cluster 2026-03-09T20:24:21.445457+0000 mgr.a (mgr.14406) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:22.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:22 vm04 bash[22793]: cluster 2026-03-09T20:24:21.445457+0000 mgr.a (mgr.14406) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:22.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:22 vm04 bash[22793]: cluster 2026-03-09T20:24:21.445457+0000 mgr.a (mgr.14406) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:22.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:22 vm03 bash[20708]: cluster 2026-03-09T20:24:21.445457+0000 mgr.a (mgr.14406) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 
GiB / 60 GiB avail 2026-03-09T20:24:22.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:22 vm03 bash[20708]: cluster 2026-03-09T20:24:21.445457+0000 mgr.a (mgr.14406) 71 : cluster [DBG] pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:23.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:23 vm08 bash[23232]: audit 2026-03-09T20:24:23.483567+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:23.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:23 vm08 bash[23232]: audit 2026-03-09T20:24:23.483567+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:23 vm04 bash[22793]: audit 2026-03-09T20:24:23.483567+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:23 vm04 bash[22793]: audit 2026-03-09T20:24:23.483567+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:23 vm03 bash[20708]: audit 2026-03-09T20:24:23.483567+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:23 vm03 bash[20708]: audit 2026-03-09T20:24:23.483567+0000 mon.b (mon.2) 47 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:24.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:24 vm08 bash[23232]: cluster 2026-03-09T20:24:23.445697+0000 mgr.a (mgr.14406) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:24.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:24 vm08 bash[23232]: cluster 2026-03-09T20:24:23.445697+0000 mgr.a (mgr.14406) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:24.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:24 vm04 bash[22793]: cluster 2026-03-09T20:24:23.445697+0000 mgr.a (mgr.14406) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:24.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:24 vm04 bash[22793]: cluster 2026-03-09T20:24:23.445697+0000 mgr.a (mgr.14406) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:24.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:24 vm03 bash[20708]: cluster 2026-03-09T20:24:23.445697+0000 mgr.a (mgr.14406) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:24.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:24 vm03 bash[20708]: cluster 2026-03-09T20:24:23.445697+0000 mgr.a (mgr.14406) 72 : cluster [DBG] pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB 
/ 60 GiB avail 2026-03-09T20:24:25.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:25 vm04 bash[22793]: cluster 2026-03-09T20:24:25.445893+0000 mgr.a (mgr.14406) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:25.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:25 vm04 bash[22793]: cluster 2026-03-09T20:24:25.445893+0000 mgr.a (mgr.14406) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:25.883 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:25 vm03 bash[20708]: cluster 2026-03-09T20:24:25.445893+0000 mgr.a (mgr.14406) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:25.883 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:25 vm03 bash[20708]: cluster 2026-03-09T20:24:25.445893+0000 mgr.a (mgr.14406) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:26.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:25 vm08 bash[23232]: cluster 2026-03-09T20:24:25.445893+0000 mgr.a (mgr.14406) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:26.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:25 vm08 bash[23232]: cluster 2026-03-09T20:24:25.445893+0000 mgr.a (mgr.14406) 73 : cluster [DBG] pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:26.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:24:25 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:24:25] "GET /metrics HTTP/1.1" 200 21328 "" "Prometheus/2.51.0" 2026-03-09T20:24:28.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:28 vm08 bash[23232]: cluster 2026-03-09T20:24:27.446127+0000 mgr.a (mgr.14406) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:28.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:28 vm08 bash[23232]: cluster 2026-03-09T20:24:27.446127+0000 mgr.a (mgr.14406) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:28.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:28 vm04 bash[22793]: cluster 2026-03-09T20:24:27.446127+0000 mgr.a (mgr.14406) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:28.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:28 vm04 bash[22793]: cluster 2026-03-09T20:24:27.446127+0000 mgr.a (mgr.14406) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:28 vm03 bash[20708]: cluster 2026-03-09T20:24:27.446127+0000 mgr.a (mgr.14406) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:28 vm03 bash[20708]: cluster 2026-03-09T20:24:27.446127+0000 mgr.a (mgr.14406) 74 : cluster [DBG] pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:30.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:30 vm08 bash[23232]: cluster 2026-03-09T20:24:29.446346+0000 mgr.a (mgr.14406) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB 
avail 2026-03-09T20:24:30.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:30 vm08 bash[23232]: cluster 2026-03-09T20:24:29.446346+0000 mgr.a (mgr.14406) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:30.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:30 vm04 bash[22793]: cluster 2026-03-09T20:24:29.446346+0000 mgr.a (mgr.14406) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:30.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:30 vm04 bash[22793]: cluster 2026-03-09T20:24:29.446346+0000 mgr.a (mgr.14406) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:30.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:30 vm03 bash[20708]: cluster 2026-03-09T20:24:29.446346+0000 mgr.a (mgr.14406) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:30.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:30 vm03 bash[20708]: cluster 2026-03-09T20:24:29.446346+0000 mgr.a (mgr.14406) 75 : cluster [DBG] pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:32.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:32 vm08 bash[23232]: cluster 2026-03-09T20:24:31.446567+0000 mgr.a (mgr.14406) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:32.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:32 vm08 bash[23232]: cluster 2026-03-09T20:24:31.446567+0000 mgr.a (mgr.14406) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:32.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:32 vm04 bash[22793]: cluster 2026-03-09T20:24:31.446567+0000 mgr.a (mgr.14406) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:32.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:32 vm04 bash[22793]: cluster 2026-03-09T20:24:31.446567+0000 mgr.a (mgr.14406) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:32 vm03 bash[20708]: cluster 2026-03-09T20:24:31.446567+0000 mgr.a (mgr.14406) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:32.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:32 vm03 bash[20708]: cluster 2026-03-09T20:24:31.446567+0000 mgr.a (mgr.14406) 76 : cluster [DBG] pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:34.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:34 vm08 bash[23232]: cluster 2026-03-09T20:24:33.446821+0000 mgr.a (mgr.14406) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:34.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:34 vm08 bash[23232]: cluster 2026-03-09T20:24:33.446821+0000 mgr.a (mgr.14406) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:34.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:34 vm04 bash[22793]: cluster 2026-03-09T20:24:33.446821+0000 mgr.a (mgr.14406) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 
449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:34.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:34 vm04 bash[22793]: cluster 2026-03-09T20:24:33.446821+0000 mgr.a (mgr.14406) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:34.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:34 vm03 bash[20708]: cluster 2026-03-09T20:24:33.446821+0000 mgr.a (mgr.14406) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:34.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:34 vm03 bash[20708]: cluster 2026-03-09T20:24:33.446821+0000 mgr.a (mgr.14406) 77 : cluster [DBG] pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:36.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:24:35 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:24:35] "GET /metrics HTTP/1.1" 200 21328 "" "Prometheus/2.51.0" 2026-03-09T20:24:36.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:36 vm08 bash[23232]: cluster 2026-03-09T20:24:35.447087+0000 mgr.a (mgr.14406) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:36.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:36 vm08 bash[23232]: cluster 2026-03-09T20:24:35.447087+0000 mgr.a (mgr.14406) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:36.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:36 vm04 bash[22793]: cluster 2026-03-09T20:24:35.447087+0000 mgr.a (mgr.14406) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:36.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:36 vm04 bash[22793]: cluster 2026-03-09T20:24:35.447087+0000 mgr.a (mgr.14406) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:36.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:36 vm03 bash[20708]: cluster 2026-03-09T20:24:35.447087+0000 mgr.a (mgr.14406) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:36.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:36 vm03 bash[20708]: cluster 2026-03-09T20:24:35.447087+0000 mgr.a (mgr.14406) 78 : cluster [DBG] pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:38.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:38 vm08 bash[23232]: cluster 2026-03-09T20:24:37.447293+0000 mgr.a (mgr.14406) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:38.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:38 vm08 bash[23232]: cluster 2026-03-09T20:24:37.447293+0000 mgr.a (mgr.14406) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:38.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:38 vm08 bash[23232]: audit 2026-03-09T20:24:38.483737+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:38.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:38 vm08 bash[23232]: audit 2026-03-09T20:24:38.483737+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.14406 
192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:38.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:38 vm04 bash[22793]: cluster 2026-03-09T20:24:37.447293+0000 mgr.a (mgr.14406) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:38.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:38 vm04 bash[22793]: cluster 2026-03-09T20:24:37.447293+0000 mgr.a (mgr.14406) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:38.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:38 vm04 bash[22793]: audit 2026-03-09T20:24:38.483737+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:38.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:38 vm04 bash[22793]: audit 2026-03-09T20:24:38.483737+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:38 vm03 bash[20708]: cluster 2026-03-09T20:24:37.447293+0000 mgr.a (mgr.14406) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:38 vm03 bash[20708]: cluster 2026-03-09T20:24:37.447293+0000 mgr.a (mgr.14406) 79 : cluster [DBG] pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:38 vm03 bash[20708]: audit 2026-03-09T20:24:38.483737+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:38 vm03 bash[20708]: audit 2026-03-09T20:24:38.483737+0000 mon.b (mon.2) 48 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:40.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:40 vm08 bash[23232]: cluster 2026-03-09T20:24:39.447471+0000 mgr.a (mgr.14406) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:40.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:40 vm08 bash[23232]: cluster 2026-03-09T20:24:39.447471+0000 mgr.a (mgr.14406) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:40.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:40 vm04 bash[22793]: cluster 2026-03-09T20:24:39.447471+0000 mgr.a (mgr.14406) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:40.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:40 vm04 bash[22793]: cluster 2026-03-09T20:24:39.447471+0000 mgr.a (mgr.14406) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:40.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:40 vm03 bash[20708]: cluster 2026-03-09T20:24:39.447471+0000 mgr.a (mgr.14406) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB 
data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:40.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:40 vm03 bash[20708]: cluster 2026-03-09T20:24:39.447471+0000 mgr.a (mgr.14406) 80 : cluster [DBG] pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:42.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:42 vm08 bash[23232]: cluster 2026-03-09T20:24:41.447664+0000 mgr.a (mgr.14406) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:42.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:42 vm08 bash[23232]: cluster 2026-03-09T20:24:41.447664+0000 mgr.a (mgr.14406) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:42.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:42 vm04 bash[22793]: cluster 2026-03-09T20:24:41.447664+0000 mgr.a (mgr.14406) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:42.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:42 vm04 bash[22793]: cluster 2026-03-09T20:24:41.447664+0000 mgr.a (mgr.14406) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:42.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:42 vm03 bash[20708]: cluster 2026-03-09T20:24:41.447664+0000 mgr.a (mgr.14406) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:42.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:42 vm03 bash[20708]: cluster 2026-03-09T20:24:41.447664+0000 mgr.a (mgr.14406) 81 : cluster [DBG] pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:44.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:44 vm08 bash[23232]: cluster 2026-03-09T20:24:43.447869+0000 mgr.a (mgr.14406) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:44.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:44 vm08 bash[23232]: cluster 2026-03-09T20:24:43.447869+0000 mgr.a (mgr.14406) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:44 vm04 bash[22793]: cluster 2026-03-09T20:24:43.447869+0000 mgr.a (mgr.14406) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:44 vm04 bash[22793]: cluster 2026-03-09T20:24:43.447869+0000 mgr.a (mgr.14406) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:44.906 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:44 vm03 bash[20708]: cluster 2026-03-09T20:24:43.447869+0000 mgr.a (mgr.14406) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:44 vm03 bash[20708]: cluster 2026-03-09T20:24:43.447869+0000 mgr.a (mgr.14406) 82 : cluster [DBG] pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:45.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:45 vm04 bash[22793]: cluster 2026-03-09T20:24:45.448115+0000 mgr.a (mgr.14406) 83 : cluster 
[DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:45.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:45 vm04 bash[22793]: cluster 2026-03-09T20:24:45.448115+0000 mgr.a (mgr.14406) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:45.882 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:45 vm03 bash[20708]: cluster 2026-03-09T20:24:45.448115+0000 mgr.a (mgr.14406) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:45.882 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:45 vm03 bash[20708]: cluster 2026-03-09T20:24:45.448115+0000 mgr.a (mgr.14406) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:46.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:45 vm08 bash[23232]: cluster 2026-03-09T20:24:45.448115+0000 mgr.a (mgr.14406) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:46.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:45 vm08 bash[23232]: cluster 2026-03-09T20:24:45.448115+0000 mgr.a (mgr.14406) 83 : cluster [DBG] pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:46.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:24:45 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:24:45] "GET /metrics HTTP/1.1" 200 21328 "" "Prometheus/2.51.0" 2026-03-09T20:24:48.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:48 vm08 bash[23232]: cluster 2026-03-09T20:24:47.448314+0000 mgr.a (mgr.14406) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:48.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:48 vm08 bash[23232]: cluster 2026-03-09T20:24:47.448314+0000 mgr.a (mgr.14406) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:48.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:48 vm04 bash[22793]: cluster 2026-03-09T20:24:47.448314+0000 mgr.a (mgr.14406) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:48.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:48 vm04 bash[22793]: cluster 2026-03-09T20:24:47.448314+0000 mgr.a (mgr.14406) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:48.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:48 vm03 bash[20708]: cluster 2026-03-09T20:24:47.448314+0000 mgr.a (mgr.14406) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:48.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:48 vm03 bash[20708]: cluster 2026-03-09T20:24:47.448314+0000 mgr.a (mgr.14406) 84 : cluster [DBG] pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:50.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:50 vm08 bash[23232]: cluster 2026-03-09T20:24:49.448534+0000 mgr.a (mgr.14406) 85 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:50.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:50 vm08 bash[23232]: cluster 2026-03-09T20:24:49.448534+0000 mgr.a (mgr.14406) 85 : cluster [DBG] 
pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:50.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:50 vm04 bash[22793]: cluster 2026-03-09T20:24:49.448534+0000 mgr.a (mgr.14406) 85 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:50.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:50 vm04 bash[22793]: cluster 2026-03-09T20:24:49.448534+0000 mgr.a (mgr.14406) 85 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:50.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:50 vm03 bash[20708]: cluster 2026-03-09T20:24:49.448534+0000 mgr.a (mgr.14406) 85 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:50.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:50 vm03 bash[20708]: cluster 2026-03-09T20:24:49.448534+0000 mgr.a (mgr.14406) 85 : cluster [DBG] pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:52.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:52 vm08 bash[23232]: cluster 2026-03-09T20:24:51.448729+0000 mgr.a (mgr.14406) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:52.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:52 vm08 bash[23232]: cluster 2026-03-09T20:24:51.448729+0000 mgr.a (mgr.14406) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:52.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:52 vm04 bash[22793]: cluster 2026-03-09T20:24:51.448729+0000 mgr.a (mgr.14406) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:52.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:52 vm04 bash[22793]: cluster 2026-03-09T20:24:51.448729+0000 mgr.a (mgr.14406) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:52.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:52 vm03 bash[20708]: cluster 2026-03-09T20:24:51.448729+0000 mgr.a (mgr.14406) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:52.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:52 vm03 bash[20708]: cluster 2026-03-09T20:24:51.448729+0000 mgr.a (mgr.14406) 86 : cluster [DBG] pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:53.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:53 vm08 bash[23232]: audit 2026-03-09T20:24:53.484265+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:53.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:53 vm08 bash[23232]: audit 2026-03-09T20:24:53.484265+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:53.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:53 vm04 bash[22793]: audit 2026-03-09T20:24:53.484265+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:53.866 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:53 vm04 bash[22793]: audit 2026-03-09T20:24:53.484265+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:53.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:53 vm03 bash[20708]: audit 2026-03-09T20:24:53.484265+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:53.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:53 vm03 bash[20708]: audit 2026-03-09T20:24:53.484265+0000 mon.b (mon.2) 49 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:24:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:54 vm08 bash[23232]: cluster 2026-03-09T20:24:53.449198+0000 mgr.a (mgr.14406) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:54.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:54 vm08 bash[23232]: cluster 2026-03-09T20:24:53.449198+0000 mgr.a (mgr.14406) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:54 vm04 bash[22793]: cluster 2026-03-09T20:24:53.449198+0000 mgr.a (mgr.14406) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:54 vm04 bash[22793]: cluster 2026-03-09T20:24:53.449198+0000 mgr.a (mgr.14406) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:54 vm03 bash[20708]: cluster 2026-03-09T20:24:53.449198+0000 mgr.a (mgr.14406) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:54 vm03 bash[20708]: cluster 2026-03-09T20:24:53.449198+0000 mgr.a (mgr.14406) 87 : cluster [DBG] pgmap v63: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:56.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:24:55 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:24:55] "GET /metrics HTTP/1.1" 200 21328 "" "Prometheus/2.51.0" 2026-03-09T20:24:56.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:56 vm08 bash[23232]: cluster 2026-03-09T20:24:55.449422+0000 mgr.a (mgr.14406) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:56.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:56 vm08 bash[23232]: cluster 2026-03-09T20:24:55.449422+0000 mgr.a (mgr.14406) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:56.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:56 vm04 bash[22793]: cluster 2026-03-09T20:24:55.449422+0000 mgr.a (mgr.14406) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:56.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:56 vm04 bash[22793]: cluster 2026-03-09T20:24:55.449422+0000 mgr.a (mgr.14406) 88 : cluster [DBG] pgmap v64: 
1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:56 vm03 bash[20708]: cluster 2026-03-09T20:24:55.449422+0000 mgr.a (mgr.14406) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:56 vm03 bash[20708]: cluster 2026-03-09T20:24:55.449422+0000 mgr.a (mgr.14406) 88 : cluster [DBG] pgmap v64: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:58.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:58 vm08 bash[23232]: cluster 2026-03-09T20:24:57.449618+0000 mgr.a (mgr.14406) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:58.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:24:58 vm08 bash[23232]: cluster 2026-03-09T20:24:57.449618+0000 mgr.a (mgr.14406) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:58.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:58 vm04 bash[22793]: cluster 2026-03-09T20:24:57.449618+0000 mgr.a (mgr.14406) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:58.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:24:58 vm04 bash[22793]: cluster 2026-03-09T20:24:57.449618+0000 mgr.a (mgr.14406) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:58 vm03 bash[20708]: cluster 2026-03-09T20:24:57.449618+0000 mgr.a (mgr.14406) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:24:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:24:58 vm03 bash[20708]: cluster 2026-03-09T20:24:57.449618+0000 mgr.a (mgr.14406) 89 : cluster [DBG] pgmap v65: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:00.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:00 vm04 bash[22793]: cluster 2026-03-09T20:24:59.449795+0000 mgr.a (mgr.14406) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:00.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:00 vm04 bash[22793]: cluster 2026-03-09T20:24:59.449795+0000 mgr.a (mgr.14406) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:00 vm03 bash[20708]: cluster 2026-03-09T20:24:59.449795+0000 mgr.a (mgr.14406) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:00 vm03 bash[20708]: cluster 2026-03-09T20:24:59.449795+0000 mgr.a (mgr.14406) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:01.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:00 vm08 bash[23232]: cluster 2026-03-09T20:24:59.449795+0000 mgr.a (mgr.14406) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:01.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:00 vm08 bash[23232]: cluster 2026-03-09T20:24:59.449795+0000 
mgr.a (mgr.14406) 90 : cluster [DBG] pgmap v66: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:01.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:01 vm03 bash[20708]: cluster 2026-03-09T20:25:01.449993+0000 mgr.a (mgr.14406) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:01.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:01 vm03 bash[20708]: cluster 2026-03-09T20:25:01.449993+0000 mgr.a (mgr.14406) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:01.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:01 vm08 bash[23232]: cluster 2026-03-09T20:25:01.449993+0000 mgr.a (mgr.14406) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:01.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:01 vm08 bash[23232]: cluster 2026-03-09T20:25:01.449993+0000 mgr.a (mgr.14406) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:01.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:01 vm04 bash[22793]: cluster 2026-03-09T20:25:01.449993+0000 mgr.a (mgr.14406) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:01.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:01 vm04 bash[22793]: cluster 2026-03-09T20:25:01.449993+0000 mgr.a (mgr.14406) 91 : cluster [DBG] pgmap v67: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:04.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:04 vm08 bash[23232]: cluster 2026-03-09T20:25:03.450178+0000 mgr.a (mgr.14406) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:04.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:04 vm08 bash[23232]: cluster 2026-03-09T20:25:03.450178+0000 mgr.a (mgr.14406) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:04.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:04 vm04 bash[22793]: cluster 2026-03-09T20:25:03.450178+0000 mgr.a (mgr.14406) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:04.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:04 vm04 bash[22793]: cluster 2026-03-09T20:25:03.450178+0000 mgr.a (mgr.14406) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:04.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:04 vm03 bash[20708]: cluster 2026-03-09T20:25:03.450178+0000 mgr.a (mgr.14406) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:04.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:04 vm03 bash[20708]: cluster 2026-03-09T20:25:03.450178+0000 mgr.a (mgr.14406) 92 : cluster [DBG] pgmap v68: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:06.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:25:05 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:25:05] "GET /metrics HTTP/1.1" 200 21328 "" "Prometheus/2.51.0" 2026-03-09T20:25:06.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:06 vm08 bash[23232]: cluster 2026-03-09T20:25:05.450372+0000 mgr.a 
(mgr.14406) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:06.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:06 vm08 bash[23232]: cluster 2026-03-09T20:25:05.450372+0000 mgr.a (mgr.14406) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:06.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:06 vm08 bash[23232]: audit 2026-03-09T20:25:06.511836+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:25:06.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:06 vm08 bash[23232]: audit 2026-03-09T20:25:06.511836+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:25:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:06 vm04 bash[22793]: cluster 2026-03-09T20:25:05.450372+0000 mgr.a (mgr.14406) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:06 vm04 bash[22793]: cluster 2026-03-09T20:25:05.450372+0000 mgr.a (mgr.14406) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:06 vm04 bash[22793]: audit 2026-03-09T20:25:06.511836+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:25:06.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:06 vm04 bash[22793]: audit 2026-03-09T20:25:06.511836+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:25:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:06 vm03 bash[20708]: cluster 2026-03-09T20:25:05.450372+0000 mgr.a (mgr.14406) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:06 vm03 bash[20708]: cluster 2026-03-09T20:25:05.450372+0000 mgr.a (mgr.14406) 93 : cluster [DBG] pgmap v69: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:06 vm03 bash[20708]: audit 2026-03-09T20:25:06.511836+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:25:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:06 vm03 bash[20708]: audit 2026-03-09T20:25:06.511836+0000 mon.b (mon.2) 50 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:25:07.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:07 vm08 bash[23232]: audit 2026-03-09T20:25:06.860611+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:25:07.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:07 vm08 bash[23232]: audit 2026-03-09T20:25:06.860611+0000 mon.b 
(mon.2) 51 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:25:07.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:07 vm08 bash[23232]: audit 2026-03-09T20:25:06.861938+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:25:07.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:07 vm08 bash[23232]: audit 2026-03-09T20:25:06.861938+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:25:07.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:07 vm08 bash[23232]: audit 2026-03-09T20:25:06.867053+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:25:07.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:07 vm08 bash[23232]: audit 2026-03-09T20:25:06.867053+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:25:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:07 vm04 bash[22793]: audit 2026-03-09T20:25:06.860611+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:25:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:07 vm04 bash[22793]: audit 2026-03-09T20:25:06.860611+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:25:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:07 vm04 bash[22793]: audit 2026-03-09T20:25:06.861938+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:25:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:07 vm04 bash[22793]: audit 2026-03-09T20:25:06.861938+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:25:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:07 vm04 bash[22793]: audit 2026-03-09T20:25:06.867053+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:25:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:07 vm04 bash[22793]: audit 2026-03-09T20:25:06.867053+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:25:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:07 vm03 bash[20708]: audit 2026-03-09T20:25:06.860611+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:25:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:07 vm03 bash[20708]: audit 2026-03-09T20:25:06.860611+0000 mon.b (mon.2) 51 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:25:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:07 vm03 bash[20708]: audit 2026-03-09T20:25:06.861938+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T20:25:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:07 vm03 bash[20708]: audit 2026-03-09T20:25:06.861938+0000 mon.b (mon.2) 52 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:25:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:07 vm03 bash[20708]: audit 2026-03-09T20:25:06.867053+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:25:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:07 vm03 bash[20708]: audit 2026-03-09T20:25:06.867053+0000 mon.a (mon.0) 536 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:25:08.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:08 vm08 bash[23232]: cluster 2026-03-09T20:25:07.450615+0000 mgr.a (mgr.14406) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:08.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:08 vm08 bash[23232]: cluster 2026-03-09T20:25:07.450615+0000 mgr.a (mgr.14406) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:08.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:08 vm08 bash[23232]: audit 2026-03-09T20:25:08.484470+0000 mon.b (mon.2) 53 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:08.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:08 vm08 bash[23232]: audit 2026-03-09T20:25:08.484470+0000 mon.b (mon.2) 53 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:08 vm04 bash[22793]: cluster 2026-03-09T20:25:07.450615+0000 mgr.a (mgr.14406) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:08 vm04 bash[22793]: cluster 2026-03-09T20:25:07.450615+0000 mgr.a (mgr.14406) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:08 vm04 bash[22793]: audit 2026-03-09T20:25:08.484470+0000 mon.b (mon.2) 53 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:08 vm04 bash[22793]: audit 2026-03-09T20:25:08.484470+0000 mon.b (mon.2) 53 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:08 vm03 bash[20708]: cluster 2026-03-09T20:25:07.450615+0000 mgr.a (mgr.14406) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:08 vm03 bash[20708]: cluster 2026-03-09T20:25:07.450615+0000 mgr.a (mgr.14406) 94 : cluster [DBG] pgmap v70: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:08 vm03 bash[20708]: audit 2026-03-09T20:25:08.484470+0000 mon.b (mon.2) 53 
: audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:08 vm03 bash[20708]: audit 2026-03-09T20:25:08.484470+0000 mon.b (mon.2) 53 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:10.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:10 vm08 bash[23232]: cluster 2026-03-09T20:25:09.450850+0000 mgr.a (mgr.14406) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:10.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:10 vm08 bash[23232]: cluster 2026-03-09T20:25:09.450850+0000 mgr.a (mgr.14406) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:10.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:10 vm04 bash[22793]: cluster 2026-03-09T20:25:09.450850+0000 mgr.a (mgr.14406) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:10.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:10 vm04 bash[22793]: cluster 2026-03-09T20:25:09.450850+0000 mgr.a (mgr.14406) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:10.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:10 vm03 bash[20708]: cluster 2026-03-09T20:25:09.450850+0000 mgr.a (mgr.14406) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:10.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:10 vm03 bash[20708]: cluster 2026-03-09T20:25:09.450850+0000 mgr.a (mgr.14406) 95 : cluster [DBG] pgmap v71: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:12.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:12 vm08 bash[23232]: cluster 2026-03-09T20:25:11.451093+0000 mgr.a (mgr.14406) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:12.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:12 vm08 bash[23232]: cluster 2026-03-09T20:25:11.451093+0000 mgr.a (mgr.14406) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:12.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:12 vm04 bash[22793]: cluster 2026-03-09T20:25:11.451093+0000 mgr.a (mgr.14406) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:12.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:12 vm04 bash[22793]: cluster 2026-03-09T20:25:11.451093+0000 mgr.a (mgr.14406) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:12.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:12 vm03 bash[20708]: cluster 2026-03-09T20:25:11.451093+0000 mgr.a (mgr.14406) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:12.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:12 vm03 bash[20708]: cluster 2026-03-09T20:25:11.451093+0000 mgr.a (mgr.14406) 96 : cluster [DBG] pgmap v72: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:14.808 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:14 vm08 bash[23232]: cluster 2026-03-09T20:25:13.451293+0000 mgr.a (mgr.14406) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:14.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:14 vm08 bash[23232]: cluster 2026-03-09T20:25:13.451293+0000 mgr.a (mgr.14406) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:14.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:14 vm04 bash[22793]: cluster 2026-03-09T20:25:13.451293+0000 mgr.a (mgr.14406) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:14.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:14 vm04 bash[22793]: cluster 2026-03-09T20:25:13.451293+0000 mgr.a (mgr.14406) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:14.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:14 vm03 bash[20708]: cluster 2026-03-09T20:25:13.451293+0000 mgr.a (mgr.14406) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:14.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:14 vm03 bash[20708]: cluster 2026-03-09T20:25:13.451293+0000 mgr.a (mgr.14406) 97 : cluster [DBG] pgmap v73: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:16.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:25:15 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:25:15] "GET /metrics HTTP/1.1" 200 21330 "" "Prometheus/2.51.0" 2026-03-09T20:25:16.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:16 vm08 bash[23232]: cluster 2026-03-09T20:25:15.451469+0000 mgr.a (mgr.14406) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:16.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:16 vm08 bash[23232]: cluster 2026-03-09T20:25:15.451469+0000 mgr.a (mgr.14406) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:16.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:16 vm04 bash[22793]: cluster 2026-03-09T20:25:15.451469+0000 mgr.a (mgr.14406) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:16.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:16 vm04 bash[22793]: cluster 2026-03-09T20:25:15.451469+0000 mgr.a (mgr.14406) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:16.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:16 vm03 bash[20708]: cluster 2026-03-09T20:25:15.451469+0000 mgr.a (mgr.14406) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:16.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:16 vm03 bash[20708]: cluster 2026-03-09T20:25:15.451469+0000 mgr.a (mgr.14406) 98 : cluster [DBG] pgmap v74: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:18.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:18 vm08 bash[23232]: cluster 2026-03-09T20:25:17.451638+0000 mgr.a (mgr.14406) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:18.809 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:18 vm08 bash[23232]: cluster 2026-03-09T20:25:17.451638+0000 mgr.a (mgr.14406) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:18.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:18 vm04 bash[22793]: cluster 2026-03-09T20:25:17.451638+0000 mgr.a (mgr.14406) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:18.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:18 vm04 bash[22793]: cluster 2026-03-09T20:25:17.451638+0000 mgr.a (mgr.14406) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:18 vm03 bash[20708]: cluster 2026-03-09T20:25:17.451638+0000 mgr.a (mgr.14406) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:18 vm03 bash[20708]: cluster 2026-03-09T20:25:17.451638+0000 mgr.a (mgr.14406) 99 : cluster [DBG] pgmap v75: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:20.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:20 vm04 bash[22793]: cluster 2026-03-09T20:25:19.451809+0000 mgr.a (mgr.14406) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:20.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:20 vm04 bash[22793]: cluster 2026-03-09T20:25:19.451809+0000 mgr.a (mgr.14406) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:20 vm03 bash[20708]: cluster 2026-03-09T20:25:19.451809+0000 mgr.a (mgr.14406) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:20 vm03 bash[20708]: cluster 2026-03-09T20:25:19.451809+0000 mgr.a (mgr.14406) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:21.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:20 vm08 bash[23232]: cluster 2026-03-09T20:25:19.451809+0000 mgr.a (mgr.14406) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:21.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:20 vm08 bash[23232]: cluster 2026-03-09T20:25:19.451809+0000 mgr.a (mgr.14406) 100 : cluster [DBG] pgmap v76: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:21.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:21 vm04 bash[22793]: cluster 2026-03-09T20:25:21.451984+0000 mgr.a (mgr.14406) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:21.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:21 vm04 bash[22793]: cluster 2026-03-09T20:25:21.451984+0000 mgr.a (mgr.14406) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:21.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:21 vm03 bash[20708]: cluster 2026-03-09T20:25:21.451984+0000 mgr.a (mgr.14406) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB 
used, 60 GiB / 60 GiB avail 2026-03-09T20:25:21.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:21 vm03 bash[20708]: cluster 2026-03-09T20:25:21.451984+0000 mgr.a (mgr.14406) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:22.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:21 vm08 bash[23232]: cluster 2026-03-09T20:25:21.451984+0000 mgr.a (mgr.14406) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:22.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:21 vm08 bash[23232]: cluster 2026-03-09T20:25:21.451984+0000 mgr.a (mgr.14406) 101 : cluster [DBG] pgmap v77: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:23.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:23 vm08 bash[23232]: audit 2026-03-09T20:25:23.484709+0000 mon.b (mon.2) 54 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:23.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:23 vm08 bash[23232]: audit 2026-03-09T20:25:23.484709+0000 mon.b (mon.2) 54 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:23 vm04 bash[22793]: audit 2026-03-09T20:25:23.484709+0000 mon.b (mon.2) 54 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:23 vm04 bash[22793]: audit 2026-03-09T20:25:23.484709+0000 mon.b (mon.2) 54 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:23 vm03 bash[20708]: audit 2026-03-09T20:25:23.484709+0000 mon.b (mon.2) 54 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:23 vm03 bash[20708]: audit 2026-03-09T20:25:23.484709+0000 mon.b (mon.2) 54 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:24.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:24 vm08 bash[23232]: cluster 2026-03-09T20:25:23.452166+0000 mgr.a (mgr.14406) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:24.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:24 vm08 bash[23232]: cluster 2026-03-09T20:25:23.452166+0000 mgr.a (mgr.14406) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:24.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:24 vm04 bash[22793]: cluster 2026-03-09T20:25:23.452166+0000 mgr.a (mgr.14406) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:24.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:24 vm04 bash[22793]: cluster 2026-03-09T20:25:23.452166+0000 mgr.a (mgr.14406) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 80 
MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:24.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:24 vm03 bash[20708]: cluster 2026-03-09T20:25:23.452166+0000 mgr.a (mgr.14406) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:24.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:24 vm03 bash[20708]: cluster 2026-03-09T20:25:23.452166+0000 mgr.a (mgr.14406) 102 : cluster [DBG] pgmap v78: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:26.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:25:25 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:25:25] "GET /metrics HTTP/1.1" 200 21335 "" "Prometheus/2.51.0" 2026-03-09T20:25:26.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:26 vm08 bash[23232]: cluster 2026-03-09T20:25:25.452365+0000 mgr.a (mgr.14406) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:26.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:26 vm08 bash[23232]: cluster 2026-03-09T20:25:25.452365+0000 mgr.a (mgr.14406) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:26.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:26 vm04 bash[22793]: cluster 2026-03-09T20:25:25.452365+0000 mgr.a (mgr.14406) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:26.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:26 vm04 bash[22793]: cluster 2026-03-09T20:25:25.452365+0000 mgr.a (mgr.14406) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:26.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:26 vm03 bash[20708]: cluster 2026-03-09T20:25:25.452365+0000 mgr.a (mgr.14406) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:26.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:26 vm03 bash[20708]: cluster 2026-03-09T20:25:25.452365+0000 mgr.a (mgr.14406) 103 : cluster [DBG] pgmap v79: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:28.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:28 vm08 bash[23232]: cluster 2026-03-09T20:25:27.452581+0000 mgr.a (mgr.14406) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:28.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:28 vm08 bash[23232]: cluster 2026-03-09T20:25:27.452581+0000 mgr.a (mgr.14406) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:28.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:28 vm04 bash[22793]: cluster 2026-03-09T20:25:27.452581+0000 mgr.a (mgr.14406) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:28.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:28 vm04 bash[22793]: cluster 2026-03-09T20:25:27.452581+0000 mgr.a (mgr.14406) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:28 vm03 bash[20708]: cluster 2026-03-09T20:25:27.452581+0000 mgr.a (mgr.14406) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 
80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:28 vm03 bash[20708]: cluster 2026-03-09T20:25:27.452581+0000 mgr.a (mgr.14406) 104 : cluster [DBG] pgmap v80: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:30.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:30 vm04 bash[22793]: cluster 2026-03-09T20:25:29.452837+0000 mgr.a (mgr.14406) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:30.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:30 vm04 bash[22793]: cluster 2026-03-09T20:25:29.452837+0000 mgr.a (mgr.14406) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:30.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:30 vm03 bash[20708]: cluster 2026-03-09T20:25:29.452837+0000 mgr.a (mgr.14406) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:30.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:30 vm03 bash[20708]: cluster 2026-03-09T20:25:29.452837+0000 mgr.a (mgr.14406) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:31.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:30 vm08 bash[23232]: cluster 2026-03-09T20:25:29.452837+0000 mgr.a (mgr.14406) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:31.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:30 vm08 bash[23232]: cluster 2026-03-09T20:25:29.452837+0000 mgr.a (mgr.14406) 105 : cluster [DBG] pgmap v81: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:31.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:31 vm04 bash[22793]: cluster 2026-03-09T20:25:31.453029+0000 mgr.a (mgr.14406) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:31.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:31 vm04 bash[22793]: cluster 2026-03-09T20:25:31.453029+0000 mgr.a (mgr.14406) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:31.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:31 vm03 bash[20708]: cluster 2026-03-09T20:25:31.453029+0000 mgr.a (mgr.14406) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:31.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:31 vm03 bash[20708]: cluster 2026-03-09T20:25:31.453029+0000 mgr.a (mgr.14406) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:32.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:31 vm08 bash[23232]: cluster 2026-03-09T20:25:31.453029+0000 mgr.a (mgr.14406) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:32.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:31 vm08 bash[23232]: cluster 2026-03-09T20:25:31.453029+0000 mgr.a (mgr.14406) 106 : cluster [DBG] pgmap v82: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:34.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:34 vm08 bash[23232]: cluster 2026-03-09T20:25:33.453229+0000 mgr.a (mgr.14406) 107 : 
cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:34.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:34 vm08 bash[23232]: cluster 2026-03-09T20:25:33.453229+0000 mgr.a (mgr.14406) 107 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:34.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:34 vm04 bash[22793]: cluster 2026-03-09T20:25:33.453229+0000 mgr.a (mgr.14406) 107 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:34.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:34 vm04 bash[22793]: cluster 2026-03-09T20:25:33.453229+0000 mgr.a (mgr.14406) 107 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:34.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:34 vm03 bash[20708]: cluster 2026-03-09T20:25:33.453229+0000 mgr.a (mgr.14406) 107 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:34.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:34 vm03 bash[20708]: cluster 2026-03-09T20:25:33.453229+0000 mgr.a (mgr.14406) 107 : cluster [DBG] pgmap v83: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:36.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:25:35 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:25:35] "GET /metrics HTTP/1.1" 200 21335 "" "Prometheus/2.51.0" 2026-03-09T20:25:36.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:36 vm04 bash[22793]: cluster 2026-03-09T20:25:35.453441+0000 mgr.a (mgr.14406) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:36.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:36 vm04 bash[22793]: cluster 2026-03-09T20:25:35.453441+0000 mgr.a (mgr.14406) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:36.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:36 vm03 bash[20708]: cluster 2026-03-09T20:25:35.453441+0000 mgr.a (mgr.14406) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:36.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:36 vm03 bash[20708]: cluster 2026-03-09T20:25:35.453441+0000 mgr.a (mgr.14406) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:37.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:36 vm08 bash[23232]: cluster 2026-03-09T20:25:35.453441+0000 mgr.a (mgr.14406) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:37.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:36 vm08 bash[23232]: cluster 2026-03-09T20:25:35.453441+0000 mgr.a (mgr.14406) 108 : cluster [DBG] pgmap v84: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:37.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:37 vm04 bash[22793]: cluster 2026-03-09T20:25:37.453696+0000 mgr.a (mgr.14406) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:37.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:37 vm04 bash[22793]: cluster 2026-03-09T20:25:37.453696+0000 mgr.a (mgr.14406) 
109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:37.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:37 vm03 bash[20708]: cluster 2026-03-09T20:25:37.453696+0000 mgr.a (mgr.14406) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:37.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:37 vm03 bash[20708]: cluster 2026-03-09T20:25:37.453696+0000 mgr.a (mgr.14406) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:38.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:37 vm08 bash[23232]: cluster 2026-03-09T20:25:37.453696+0000 mgr.a (mgr.14406) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:38.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:37 vm08 bash[23232]: cluster 2026-03-09T20:25:37.453696+0000 mgr.a (mgr.14406) 109 : cluster [DBG] pgmap v85: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:38.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:38 vm04 bash[22793]: audit 2026-03-09T20:25:38.485092+0000 mon.b (mon.2) 55 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:38.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:38 vm04 bash[22793]: audit 2026-03-09T20:25:38.485092+0000 mon.b (mon.2) 55 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:38 vm03 bash[20708]: audit 2026-03-09T20:25:38.485092+0000 mon.b (mon.2) 55 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:38 vm03 bash[20708]: audit 2026-03-09T20:25:38.485092+0000 mon.b (mon.2) 55 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:39.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:38 vm08 bash[23232]: audit 2026-03-09T20:25:38.485092+0000 mon.b (mon.2) 55 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:39.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:38 vm08 bash[23232]: audit 2026-03-09T20:25:38.485092+0000 mon.b (mon.2) 55 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:39.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:39 vm04 bash[22793]: cluster 2026-03-09T20:25:39.453876+0000 mgr.a (mgr.14406) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:39.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:39 vm04 bash[22793]: cluster 2026-03-09T20:25:39.453876+0000 mgr.a (mgr.14406) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:39 vm03 bash[20708]: cluster 2026-03-09T20:25:39.453876+0000 mgr.a 
(mgr.14406) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:39.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:39 vm03 bash[20708]: cluster 2026-03-09T20:25:39.453876+0000 mgr.a (mgr.14406) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:40.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:39 vm08 bash[23232]: cluster 2026-03-09T20:25:39.453876+0000 mgr.a (mgr.14406) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:40.058 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:39 vm08 bash[23232]: cluster 2026-03-09T20:25:39.453876+0000 mgr.a (mgr.14406) 110 : cluster [DBG] pgmap v86: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:42.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:42 vm08 bash[23232]: cluster 2026-03-09T20:25:41.454418+0000 mgr.a (mgr.14406) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:42.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:42 vm08 bash[23232]: cluster 2026-03-09T20:25:41.454418+0000 mgr.a (mgr.14406) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:42.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:42 vm04 bash[22793]: cluster 2026-03-09T20:25:41.454418+0000 mgr.a (mgr.14406) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:42.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:42 vm04 bash[22793]: cluster 2026-03-09T20:25:41.454418+0000 mgr.a (mgr.14406) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:42.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:42 vm03 bash[20708]: cluster 2026-03-09T20:25:41.454418+0000 mgr.a (mgr.14406) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:42.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:42 vm03 bash[20708]: cluster 2026-03-09T20:25:41.454418+0000 mgr.a (mgr.14406) 111 : cluster [DBG] pgmap v87: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:44.808 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:44 vm08 bash[23232]: cluster 2026-03-09T20:25:43.454607+0000 mgr.a (mgr.14406) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:44.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:44 vm08 bash[23232]: cluster 2026-03-09T20:25:43.454607+0000 mgr.a (mgr.14406) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:44 vm04 bash[22793]: cluster 2026-03-09T20:25:43.454607+0000 mgr.a (mgr.14406) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:44 vm04 bash[22793]: cluster 2026-03-09T20:25:43.454607+0000 mgr.a (mgr.14406) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:44 vm03 
bash[20708]: cluster 2026-03-09T20:25:43.454607+0000 mgr.a (mgr.14406) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:44 vm03 bash[20708]: cluster 2026-03-09T20:25:43.454607+0000 mgr.a (mgr.14406) 112 : cluster [DBG] pgmap v88: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:46.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:25:45 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:25:45] "GET /metrics HTTP/1.1" 200 21335 "" "Prometheus/2.51.0" 2026-03-09T20:25:46.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:46 vm03 bash[20708]: cluster 2026-03-09T20:25:45.454796+0000 mgr.a (mgr.14406) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:46.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:46 vm03 bash[20708]: cluster 2026-03-09T20:25:45.454796+0000 mgr.a (mgr.14406) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:47.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:46 vm08 bash[23232]: cluster 2026-03-09T20:25:45.454796+0000 mgr.a (mgr.14406) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:47.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:46 vm08 bash[23232]: cluster 2026-03-09T20:25:45.454796+0000 mgr.a (mgr.14406) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:47.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:46 vm04 bash[22793]: cluster 2026-03-09T20:25:45.454796+0000 mgr.a (mgr.14406) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:47.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:46 vm04 bash[22793]: cluster 2026-03-09T20:25:45.454796+0000 mgr.a (mgr.14406) 113 : cluster [DBG] pgmap v89: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:47.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:47 vm04 bash[22793]: cluster 2026-03-09T20:25:47.454999+0000 mgr.a (mgr.14406) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:47.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:47 vm04 bash[22793]: cluster 2026-03-09T20:25:47.454999+0000 mgr.a (mgr.14406) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:47.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:47 vm03 bash[20708]: cluster 2026-03-09T20:25:47.454999+0000 mgr.a (mgr.14406) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:47.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:47 vm03 bash[20708]: cluster 2026-03-09T20:25:47.454999+0000 mgr.a (mgr.14406) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:48.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:47 vm08 bash[23232]: cluster 2026-03-09T20:25:47.454999+0000 mgr.a (mgr.14406) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:48.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:47 
vm08 bash[23232]: cluster 2026-03-09T20:25:47.454999+0000 mgr.a (mgr.14406) 114 : cluster [DBG] pgmap v90: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:50.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:50 vm08 bash[23232]: cluster 2026-03-09T20:25:49.455217+0000 mgr.a (mgr.14406) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:50.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:50 vm08 bash[23232]: cluster 2026-03-09T20:25:49.455217+0000 mgr.a (mgr.14406) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:50.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:50 vm04 bash[22793]: cluster 2026-03-09T20:25:49.455217+0000 mgr.a (mgr.14406) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:50.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:50 vm04 bash[22793]: cluster 2026-03-09T20:25:49.455217+0000 mgr.a (mgr.14406) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:50.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:50 vm03 bash[20708]: cluster 2026-03-09T20:25:49.455217+0000 mgr.a (mgr.14406) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:50.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:50 vm03 bash[20708]: cluster 2026-03-09T20:25:49.455217+0000 mgr.a (mgr.14406) 115 : cluster [DBG] pgmap v91: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:52.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:52 vm08 bash[23232]: cluster 2026-03-09T20:25:51.455550+0000 mgr.a (mgr.14406) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:52.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:52 vm08 bash[23232]: cluster 2026-03-09T20:25:51.455550+0000 mgr.a (mgr.14406) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:52.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:52 vm04 bash[22793]: cluster 2026-03-09T20:25:51.455550+0000 mgr.a (mgr.14406) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:52.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:52 vm04 bash[22793]: cluster 2026-03-09T20:25:51.455550+0000 mgr.a (mgr.14406) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:52.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:52 vm03 bash[20708]: cluster 2026-03-09T20:25:51.455550+0000 mgr.a (mgr.14406) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:52.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:52 vm03 bash[20708]: cluster 2026-03-09T20:25:51.455550+0000 mgr.a (mgr.14406) 116 : cluster [DBG] pgmap v92: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:53.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:53 vm08 bash[23232]: audit 2026-03-09T20:25:53.485230+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-09T20:25:53.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:53 vm08 bash[23232]: audit 2026-03-09T20:25:53.485230+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:53.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:53 vm04 bash[22793]: audit 2026-03-09T20:25:53.485230+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:53.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:53 vm04 bash[22793]: audit 2026-03-09T20:25:53.485230+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:53.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:53 vm03 bash[20708]: audit 2026-03-09T20:25:53.485230+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:53.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:53 vm03 bash[20708]: audit 2026-03-09T20:25:53.485230+0000 mon.b (mon.2) 56 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:25:54.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:54 vm08 bash[23232]: cluster 2026-03-09T20:25:53.455859+0000 mgr.a (mgr.14406) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:54.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:54 vm08 bash[23232]: cluster 2026-03-09T20:25:53.455859+0000 mgr.a (mgr.14406) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:54 vm04 bash[22793]: cluster 2026-03-09T20:25:53.455859+0000 mgr.a (mgr.14406) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:54 vm04 bash[22793]: cluster 2026-03-09T20:25:53.455859+0000 mgr.a (mgr.14406) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:54 vm03 bash[20708]: cluster 2026-03-09T20:25:53.455859+0000 mgr.a (mgr.14406) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:54 vm03 bash[20708]: cluster 2026-03-09T20:25:53.455859+0000 mgr.a (mgr.14406) 117 : cluster [DBG] pgmap v93: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:56.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:25:55 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:25:55] "GET /metrics HTTP/1.1" 200 21332 "" "Prometheus/2.51.0" 2026-03-09T20:25:56.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:56 vm08 bash[23232]: cluster 2026-03-09T20:25:55.456101+0000 mgr.a (mgr.14406) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:56.809 
INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:56 vm08 bash[23232]: cluster 2026-03-09T20:25:55.456101+0000 mgr.a (mgr.14406) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:56.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:56 vm04 bash[22793]: cluster 2026-03-09T20:25:55.456101+0000 mgr.a (mgr.14406) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:56.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:56 vm04 bash[22793]: cluster 2026-03-09T20:25:55.456101+0000 mgr.a (mgr.14406) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:56 vm03 bash[20708]: cluster 2026-03-09T20:25:55.456101+0000 mgr.a (mgr.14406) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:56 vm03 bash[20708]: cluster 2026-03-09T20:25:55.456101+0000 mgr.a (mgr.14406) 118 : cluster [DBG] pgmap v94: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:59.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:58 vm04 bash[22793]: cluster 2026-03-09T20:25:57.456373+0000 mgr.a (mgr.14406) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:59.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:58 vm04 bash[22793]: cluster 2026-03-09T20:25:57.456373+0000 mgr.a (mgr.14406) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:59.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:58 vm03 bash[20708]: cluster 2026-03-09T20:25:57.456373+0000 mgr.a (mgr.14406) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:59.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:58 vm03 bash[20708]: cluster 2026-03-09T20:25:57.456373+0000 mgr.a (mgr.14406) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:59.309 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:58 vm08 bash[23232]: cluster 2026-03-09T20:25:57.456373+0000 mgr.a (mgr.14406) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:25:59.309 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:58 vm08 bash[23232]: cluster 2026-03-09T20:25:57.456373+0000 mgr.a (mgr.14406) 119 : cluster [DBG] pgmap v95: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:00.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:59 vm04 bash[22793]: cluster 2026-03-09T20:25:59.456606+0000 mgr.a (mgr.14406) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:00.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:25:59 vm04 bash[22793]: cluster 2026-03-09T20:25:59.456606+0000 mgr.a (mgr.14406) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:00.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:59 vm03 bash[20708]: cluster 2026-03-09T20:25:59.456606+0000 mgr.a (mgr.14406) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 
80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:00.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:25:59 vm03 bash[20708]: cluster 2026-03-09T20:25:59.456606+0000 mgr.a (mgr.14406) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:00.309 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:59 vm08 bash[23232]: cluster 2026-03-09T20:25:59.456606+0000 mgr.a (mgr.14406) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:00.309 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:25:59 vm08 bash[23232]: cluster 2026-03-09T20:25:59.456606+0000 mgr.a (mgr.14406) 120 : cluster [DBG] pgmap v96: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:02.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:02 vm08 bash[23232]: cluster 2026-03-09T20:26:01.456945+0000 mgr.a (mgr.14406) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:02.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:02 vm08 bash[23232]: cluster 2026-03-09T20:26:01.456945+0000 mgr.a (mgr.14406) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:02.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:02 vm04 bash[22793]: cluster 2026-03-09T20:26:01.456945+0000 mgr.a (mgr.14406) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:02.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:02 vm04 bash[22793]: cluster 2026-03-09T20:26:01.456945+0000 mgr.a (mgr.14406) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:02.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:02 vm03 bash[20708]: cluster 2026-03-09T20:26:01.456945+0000 mgr.a (mgr.14406) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:02.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:02 vm03 bash[20708]: cluster 2026-03-09T20:26:01.456945+0000 mgr.a (mgr.14406) 121 : cluster [DBG] pgmap v97: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:04.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:04 vm08 bash[23232]: cluster 2026-03-09T20:26:03.457260+0000 mgr.a (mgr.14406) 122 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:04.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:04 vm08 bash[23232]: cluster 2026-03-09T20:26:03.457260+0000 mgr.a (mgr.14406) 122 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:04.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:04 vm04 bash[22793]: cluster 2026-03-09T20:26:03.457260+0000 mgr.a (mgr.14406) 122 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:04.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:04 vm04 bash[22793]: cluster 2026-03-09T20:26:03.457260+0000 mgr.a (mgr.14406) 122 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:04.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:04 vm03 bash[20708]: cluster 2026-03-09T20:26:03.457260+0000 mgr.a (mgr.14406) 122 : 
cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:04.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:04 vm03 bash[20708]: cluster 2026-03-09T20:26:03.457260+0000 mgr.a (mgr.14406) 122 : cluster [DBG] pgmap v98: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:06.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:26:05 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:26:05] "GET /metrics HTTP/1.1" 200 21332 "" "Prometheus/2.51.0" 2026-03-09T20:26:06.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:06 vm08 bash[23232]: cluster 2026-03-09T20:26:05.457555+0000 mgr.a (mgr.14406) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:06.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:06 vm08 bash[23232]: cluster 2026-03-09T20:26:05.457555+0000 mgr.a (mgr.14406) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:06 vm04 bash[22793]: cluster 2026-03-09T20:26:05.457555+0000 mgr.a (mgr.14406) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:06.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:06 vm04 bash[22793]: cluster 2026-03-09T20:26:05.457555+0000 mgr.a (mgr.14406) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:06 vm03 bash[20708]: cluster 2026-03-09T20:26:05.457555+0000 mgr.a (mgr.14406) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:06 vm03 bash[20708]: cluster 2026-03-09T20:26:05.457555+0000 mgr.a (mgr.14406) 123 : cluster [DBG] pgmap v99: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:07.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:07 vm08 bash[23232]: audit 2026-03-09T20:26:06.915392+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:07.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:07 vm08 bash[23232]: audit 2026-03-09T20:26:06.915392+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:07.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:07 vm08 bash[23232]: audit 2026-03-09T20:26:07.249184+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:07.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:07 vm08 bash[23232]: audit 2026-03-09T20:26:07.249184+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:07.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:07 vm08 bash[23232]: audit 2026-03-09T20:26:07.250198+0000 mon.b (mon.2) 59 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 
2026-03-09T20:26:07.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:07 vm08 bash[23232]: audit 2026-03-09T20:26:07.250198+0000 mon.b (mon.2) 59 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:07.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:07 vm08 bash[23232]: audit 2026-03-09T20:26:07.254695+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:07.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:07 vm08 bash[23232]: audit 2026-03-09T20:26:07.254695+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:07 vm04 bash[22793]: audit 2026-03-09T20:26:06.915392+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:07 vm04 bash[22793]: audit 2026-03-09T20:26:06.915392+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:07 vm04 bash[22793]: audit 2026-03-09T20:26:07.249184+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:07 vm04 bash[22793]: audit 2026-03-09T20:26:07.249184+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:07 vm04 bash[22793]: audit 2026-03-09T20:26:07.250198+0000 mon.b (mon.2) 59 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:07 vm04 bash[22793]: audit 2026-03-09T20:26:07.250198+0000 mon.b (mon.2) 59 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:07 vm04 bash[22793]: audit 2026-03-09T20:26:07.254695+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:07.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:07 vm04 bash[22793]: audit 2026-03-09T20:26:07.254695+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:07 vm03 bash[20708]: audit 2026-03-09T20:26:06.915392+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:07 vm03 bash[20708]: audit 2026-03-09T20:26:06.915392+0000 mon.b (mon.2) 57 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:07 vm03 bash[20708]: audit 2026-03-09T20:26:07.249184+0000 mon.b (mon.2) 58 : 
audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:07 vm03 bash[20708]: audit 2026-03-09T20:26:07.249184+0000 mon.b (mon.2) 58 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:07 vm03 bash[20708]: audit 2026-03-09T20:26:07.250198+0000 mon.b (mon.2) 59 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:07 vm03 bash[20708]: audit 2026-03-09T20:26:07.250198+0000 mon.b (mon.2) 59 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:07 vm03 bash[20708]: audit 2026-03-09T20:26:07.254695+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:07.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:07 vm03 bash[20708]: audit 2026-03-09T20:26:07.254695+0000 mon.a (mon.0) 537 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:08.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:08 vm08 bash[23232]: cluster 2026-03-09T20:26:07.457848+0000 mgr.a (mgr.14406) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:08.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:08 vm08 bash[23232]: cluster 2026-03-09T20:26:07.457848+0000 mgr.a (mgr.14406) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:08.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:08 vm08 bash[23232]: audit 2026-03-09T20:26:08.485598+0000 mon.b (mon.2) 60 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:08.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:08 vm08 bash[23232]: audit 2026-03-09T20:26:08.485598+0000 mon.b (mon.2) 60 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:08 vm04 bash[22793]: cluster 2026-03-09T20:26:07.457848+0000 mgr.a (mgr.14406) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:08 vm04 bash[22793]: cluster 2026-03-09T20:26:07.457848+0000 mgr.a (mgr.14406) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:08 vm04 bash[22793]: audit 2026-03-09T20:26:08.485598+0000 mon.b (mon.2) 60 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:08 vm04 bash[22793]: audit 2026-03-09T20:26:08.485598+0000 mon.b (mon.2) 60 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' 
cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:08 vm03 bash[20708]: cluster 2026-03-09T20:26:07.457848+0000 mgr.a (mgr.14406) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:08 vm03 bash[20708]: cluster 2026-03-09T20:26:07.457848+0000 mgr.a (mgr.14406) 124 : cluster [DBG] pgmap v100: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:08 vm03 bash[20708]: audit 2026-03-09T20:26:08.485598+0000 mon.b (mon.2) 60 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:08 vm03 bash[20708]: audit 2026-03-09T20:26:08.485598+0000 mon.b (mon.2) 60 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:10.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:10 vm08 bash[23232]: cluster 2026-03-09T20:26:09.458075+0000 mgr.a (mgr.14406) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:10.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:10 vm08 bash[23232]: cluster 2026-03-09T20:26:09.458075+0000 mgr.a (mgr.14406) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:10.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:10 vm04 bash[22793]: cluster 2026-03-09T20:26:09.458075+0000 mgr.a (mgr.14406) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:10.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:10 vm04 bash[22793]: cluster 2026-03-09T20:26:09.458075+0000 mgr.a (mgr.14406) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:10.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:10 vm03 bash[20708]: cluster 2026-03-09T20:26:09.458075+0000 mgr.a (mgr.14406) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:10.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:10 vm03 bash[20708]: cluster 2026-03-09T20:26:09.458075+0000 mgr.a (mgr.14406) 125 : cluster [DBG] pgmap v101: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:12.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:12 vm08 bash[23232]: cluster 2026-03-09T20:26:11.458277+0000 mgr.a (mgr.14406) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:12.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:12 vm08 bash[23232]: cluster 2026-03-09T20:26:11.458277+0000 mgr.a (mgr.14406) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:12.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:12 vm04 bash[22793]: cluster 2026-03-09T20:26:11.458277+0000 mgr.a (mgr.14406) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:12.866 
INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:12 vm04 bash[22793]: cluster 2026-03-09T20:26:11.458277+0000 mgr.a (mgr.14406) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:12.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:12 vm03 bash[20708]: cluster 2026-03-09T20:26:11.458277+0000 mgr.a (mgr.14406) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:12.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:12 vm03 bash[20708]: cluster 2026-03-09T20:26:11.458277+0000 mgr.a (mgr.14406) 126 : cluster [DBG] pgmap v102: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:14.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:14 vm08 bash[23232]: cluster 2026-03-09T20:26:13.458545+0000 mgr.a (mgr.14406) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:14.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:14 vm08 bash[23232]: cluster 2026-03-09T20:26:13.458545+0000 mgr.a (mgr.14406) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:14.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:14 vm04 bash[22793]: cluster 2026-03-09T20:26:13.458545+0000 mgr.a (mgr.14406) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:14.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:14 vm04 bash[22793]: cluster 2026-03-09T20:26:13.458545+0000 mgr.a (mgr.14406) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:14.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:14 vm03 bash[20708]: cluster 2026-03-09T20:26:13.458545+0000 mgr.a (mgr.14406) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:14.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:14 vm03 bash[20708]: cluster 2026-03-09T20:26:13.458545+0000 mgr.a (mgr.14406) 127 : cluster [DBG] pgmap v103: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:16.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:26:15 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:26:15] "GET /metrics HTTP/1.1" 200 21333 "" "Prometheus/2.51.0" 2026-03-09T20:26:16.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:16 vm08 bash[23232]: cluster 2026-03-09T20:26:15.458821+0000 mgr.a (mgr.14406) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:16.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:16 vm08 bash[23232]: cluster 2026-03-09T20:26:15.458821+0000 mgr.a (mgr.14406) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:16.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:16 vm04 bash[22793]: cluster 2026-03-09T20:26:15.458821+0000 mgr.a (mgr.14406) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:16.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:16 vm04 bash[22793]: cluster 2026-03-09T20:26:15.458821+0000 mgr.a (mgr.14406) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 
2026-03-09T20:26:16.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:16 vm03 bash[20708]: cluster 2026-03-09T20:26:15.458821+0000 mgr.a (mgr.14406) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:16.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:16 vm03 bash[20708]: cluster 2026-03-09T20:26:15.458821+0000 mgr.a (mgr.14406) 128 : cluster [DBG] pgmap v104: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:18.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:18 vm08 bash[23232]: cluster 2026-03-09T20:26:17.459054+0000 mgr.a (mgr.14406) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:18.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:18 vm08 bash[23232]: cluster 2026-03-09T20:26:17.459054+0000 mgr.a (mgr.14406) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:18.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:18 vm04 bash[22793]: cluster 2026-03-09T20:26:17.459054+0000 mgr.a (mgr.14406) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:18.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:18 vm04 bash[22793]: cluster 2026-03-09T20:26:17.459054+0000 mgr.a (mgr.14406) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:18 vm03 bash[20708]: cluster 2026-03-09T20:26:17.459054+0000 mgr.a (mgr.14406) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:18 vm03 bash[20708]: cluster 2026-03-09T20:26:17.459054+0000 mgr.a (mgr.14406) 129 : cluster [DBG] pgmap v105: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:20.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:20 vm04 bash[22793]: cluster 2026-03-09T20:26:19.459256+0000 mgr.a (mgr.14406) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:20.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:20 vm04 bash[22793]: cluster 2026-03-09T20:26:19.459256+0000 mgr.a (mgr.14406) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:20 vm03 bash[20708]: cluster 2026-03-09T20:26:19.459256+0000 mgr.a (mgr.14406) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:20 vm03 bash[20708]: cluster 2026-03-09T20:26:19.459256+0000 mgr.a (mgr.14406) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:21.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:20 vm08 bash[23232]: cluster 2026-03-09T20:26:19.459256+0000 mgr.a (mgr.14406) 130 : cluster [DBG] pgmap v106: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:21.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:20 vm08 bash[23232]: cluster 2026-03-09T20:26:19.459256+0000 mgr.a (mgr.14406) 130 : cluster [DBG] pgmap v106: 
1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:22.093 INFO:teuthology.orchestra.run.vm03.stderr:+ ceph orch ls 2026-03-09T20:26:22.250 INFO:teuthology.orchestra.run.vm03.stdout:NAME PORTS RUNNING REFRESHED AGE PLACEMENT 2026-03-09T20:26:22.251 INFO:teuthology.orchestra.run.vm03.stdout:alertmanager ?:9093,9094 1/1 3m ago 4m count:1 2026-03-09T20:26:22.251 INFO:teuthology.orchestra.run.vm03.stdout:grafana ?:3000 1/1 3m ago 4m count:1 2026-03-09T20:26:22.251 INFO:teuthology.orchestra.run.vm03.stdout:mgr 2/2 3m ago 6m vm03=a;vm04=b;count:2 2026-03-09T20:26:22.251 INFO:teuthology.orchestra.run.vm03.stdout:mon 3/3 3m ago 6m vm03:192.168.123.103=a;vm04:192.168.123.104=b;vm08:192.168.123.108=c;count:3 2026-03-09T20:26:22.251 INFO:teuthology.orchestra.run.vm03.stdout:node-exporter ?:9100 3/3 3m ago 4m * 2026-03-09T20:26:22.251 INFO:teuthology.orchestra.run.vm03.stdout:osd 3 3m ago - 2026-03-09T20:26:22.251 INFO:teuthology.orchestra.run.vm03.stdout:prometheus ?:9095 1/1 3m ago 4m count:1 2026-03-09T20:26:22.261 INFO:teuthology.orchestra.run.vm03.stderr:+ ceph orch ps 2026-03-09T20:26:22.431 INFO:teuthology.orchestra.run.vm03.stdout:NAME HOST PORTS STATUS REFRESHED AGE MEM USE MEM LIM VERSION IMAGE ID CONTAINER ID 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:alertmanager.vm08 vm08 *:9093,9094 running (3m) 3m ago 3m 15.8M - 0.25.0 c8568f914cd2 8362c30d47ab 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:grafana.vm03 vm03 *:3000 running (3m) 3m ago 3m 49.2M - 10.4.0 c8b91775d855 8e71dd246729 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:mgr.a vm03 *:9283,8765 running (7m) 3m ago 7m 521M - 19.2.3-678-ge911bdeb 654f31e6858e db18200f4bbf 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:mgr.b vm04 *:8443,8765 running (6m) 3m ago 6m 463M - 19.2.3-678-ge911bdeb 654f31e6858e cbb52c510633 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:mon.a vm03 running (7m) 3m ago 7m 45.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e e41cd1ebb9fa 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:mon.b vm04 running (6m) 3m ago 6m 40.9M 2048M 19.2.3-678-ge911bdeb 654f31e6858e d16c52b65b0d 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:mon.c vm08 running (6m) 3m ago 6m 41.5M 2048M 19.2.3-678-ge911bdeb 654f31e6858e f734489d9932 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:node-exporter.vm03 vm03 *:9100 running (3m) 3m ago 4m 2824k - 1.7.0 72c9c2088986 99fd7a3c7726 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:node-exporter.vm04 vm04 *:9100 running (3m) 3m ago 3m 2736k - 1.7.0 72c9c2088986 67a0da56a367 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:node-exporter.vm08 vm08 *:9100 running (3m) 3m ago 3m 2811k - 1.7.0 72c9c2088986 6023ee2587cf 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:osd.0 vm03 running (5m) 3m ago 6m 57.2M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 58b8a0a63c90 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:osd.1 vm04 running (5m) 3m ago 5m 37.0M 4096M 19.2.3-678-ge911bdeb 654f31e6858e 8c900a9af579 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:osd.2 vm08 running (4m) 3m ago 4m 56.3M 2503M 19.2.3-678-ge911bdeb 654f31e6858e 2d48ce46cdb5 2026-03-09T20:26:22.432 INFO:teuthology.orchestra.run.vm03.stdout:prometheus.vm04 vm04 *:9095 running (3m) 3m ago 3m 24.4M - 2.51.0 1d3b7f56885b e54922e36da1 2026-03-09T20:26:22.443 
INFO:teuthology.orchestra.run.vm03.stderr:+ ceph orch host ls 2026-03-09T20:26:22.604 INFO:teuthology.orchestra.run.vm03.stdout:HOST ADDR LABELS STATUS 2026-03-09T20:26:22.604 INFO:teuthology.orchestra.run.vm03.stdout:vm03 192.168.123.103 2026-03-09T20:26:22.604 INFO:teuthology.orchestra.run.vm03.stdout:vm04 192.168.123.104 2026-03-09T20:26:22.604 INFO:teuthology.orchestra.run.vm03.stdout:vm08 192.168.123.108 2026-03-09T20:26:22.604 INFO:teuthology.orchestra.run.vm03.stdout:3 hosts in cluster 2026-03-09T20:26:22.615 INFO:teuthology.orchestra.run.vm03.stderr:++ ceph orch ps --daemon-type mon -f json 2026-03-09T20:26:22.616 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -r 'last | .daemon_name' 2026-03-09T20:26:22.792 INFO:teuthology.orchestra.run.vm03.stderr:+ MON_DAEMON=mon.c 2026-03-09T20:26:22.792 INFO:teuthology.orchestra.run.vm03.stderr:++ ceph orch ps --daemon-type grafana -f json 2026-03-09T20:26:22.792 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -r .hostname 2026-03-09T20:26:22.793 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -e '.[]' 2026-03-09T20:26:22.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:22 vm04 bash[22793]: cluster 2026-03-09T20:26:21.459467+0000 mgr.a (mgr.14406) 131 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:22.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:22 vm04 bash[22793]: cluster 2026-03-09T20:26:21.459467+0000 mgr.a (mgr.14406) 131 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:22.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:22 vm03 bash[20708]: cluster 2026-03-09T20:26:21.459467+0000 mgr.a (mgr.14406) 131 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:22.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:22 vm03 bash[20708]: cluster 2026-03-09T20:26:21.459467+0000 mgr.a (mgr.14406) 131 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:22.960 INFO:teuthology.orchestra.run.vm03.stderr:+ GRAFANA_HOST=vm03 2026-03-09T20:26:22.960 INFO:teuthology.orchestra.run.vm03.stderr:++ ceph orch ps --daemon-type prometheus -f json 2026-03-09T20:26:22.960 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -r .hostname 2026-03-09T20:26:22.962 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -e '.[]' 2026-03-09T20:26:23.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:22 vm08 bash[23232]: cluster 2026-03-09T20:26:21.459467+0000 mgr.a (mgr.14406) 131 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:23.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:22 vm08 bash[23232]: cluster 2026-03-09T20:26:21.459467+0000 mgr.a (mgr.14406) 131 : cluster [DBG] pgmap v107: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:23.128 INFO:teuthology.orchestra.run.vm03.stderr:+ PROM_HOST=vm04 2026-03-09T20:26:23.128 INFO:teuthology.orchestra.run.vm03.stderr:++ ceph orch ps --daemon-type alertmanager -f json 2026-03-09T20:26:23.128 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -r .hostname 2026-03-09T20:26:23.130 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -e '.[]' 2026-03-09T20:26:23.292 INFO:teuthology.orchestra.run.vm03.stderr:+ ALERTM_HOST=vm08 2026-03-09T20:26:23.293 INFO:teuthology.orchestra.run.vm03.stderr:++ ceph orch host ls -f json 
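A minimal standalone sketch of the jq selections traced above and in the next few entries, run against abridged, illustrative JSON (the real `ceph orch ps -f json` and `ceph orch host ls -f json` documents carry many more fields per entry):

  # Abridged stand-in for `ceph orch ps --daemon-type mon -f json`
  mons='[{"daemon_name":"mon.a","hostname":"vm03"},{"daemon_name":"mon.b","hostname":"vm04"},{"daemon_name":"mon.c","hostname":"vm08"}]'
  echo "$mons" | jq -r 'last | .daemon_name'    # picks the final entry -> mon.c, i.e. MON_DAEMON in the trace
  # Single-daemon services (grafana, prometheus, alertmanager): unwrap the one-element array
  # with `.[]`, then read that element's hostname.
  echo '[{"daemon_name":"grafana.vm03","hostname":"vm03"}]' | jq -e '.[]' | jq -r '.hostname'    # -> vm03
  # Abridged stand-in for `ceph orch host ls -f json`: hostname -> addr lookup, as used just below
  # to derive GRAFANA_IP, PROM_IP and ALERTM_IP.
  hosts='[{"hostname":"vm03","addr":"192.168.123.103"},{"hostname":"vm04","addr":"192.168.123.104"},{"hostname":"vm08","addr":"192.168.123.108"}]'
  echo "$hosts" | jq -r --arg H vm03 '.[] | select(.hostname==$H) | .addr'    # -> 192.168.123.103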
2026-03-09T20:26:23.293 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -r --arg GRAFANA_HOST vm03 '.[] | select(.hostname==$GRAFANA_HOST) | .addr' 2026-03-09T20:26:23.456 INFO:teuthology.orchestra.run.vm03.stderr:+ GRAFANA_IP=192.168.123.103 2026-03-09T20:26:23.456 INFO:teuthology.orchestra.run.vm03.stderr:++ ceph orch host ls -f json 2026-03-09T20:26:23.456 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -r --arg PROM_HOST vm04 '.[] | select(.hostname==$PROM_HOST) | .addr' 2026-03-09T20:26:23.638 INFO:teuthology.orchestra.run.vm03.stderr:+ PROM_IP=192.168.123.104 2026-03-09T20:26:23.638 INFO:teuthology.orchestra.run.vm03.stderr:++ ceph orch host ls -f json 2026-03-09T20:26:23.638 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -r --arg ALERTM_HOST vm08 '.[] | select(.hostname==$ALERTM_HOST) | .addr' 2026-03-09T20:26:23.799 INFO:teuthology.orchestra.run.vm03.stderr:+ ALERTM_IP=192.168.123.108 2026-03-09T20:26:23.800 INFO:teuthology.orchestra.run.vm03.stderr:++ ceph orch host ls -f json 2026-03-09T20:26:23.800 INFO:teuthology.orchestra.run.vm03.stderr:++ jq -r '.[] | .addr' 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:22.248459+0000 mgr.a (mgr.14406) 132 : audit [DBG] from='client.14439 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:22.248459+0000 mgr.a (mgr.14406) 132 : audit [DBG] from='client.14439 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:22.428085+0000 mgr.a (mgr.14406) 133 : audit [DBG] from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:22.428085+0000 mgr.a (mgr.14406) 133 : audit [DBG] from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:22.603220+0000 mgr.a (mgr.14406) 134 : audit [DBG] from='client.14451 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:22.603220+0000 mgr.a (mgr.14406) 134 : audit [DBG] from='client.14451 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:22.779401+0000 mgr.a (mgr.14406) 135 : audit [DBG] from='client.14457 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:22.779401+0000 mgr.a (mgr.14406) 135 : audit [DBG] from='client.14457 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 
2026-03-09T20:26:22.948957+0000 mgr.a (mgr.14406) 136 : audit [DBG] from='client.14463 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:22.948957+0000 mgr.a (mgr.14406) 136 : audit [DBG] from='client.14463 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:23.115369+0000 mgr.a (mgr.14406) 137 : audit [DBG] from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:23.115369+0000 mgr.a (mgr.14406) 137 : audit [DBG] from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:23.280883+0000 mgr.a (mgr.14406) 138 : audit [DBG] from='client.24307 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:23.280883+0000 mgr.a (mgr.14406) 138 : audit [DBG] from='client.24307 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:23.445858+0000 mgr.a (mgr.14406) 139 : audit [DBG] from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:23.445858+0000 mgr.a (mgr.14406) 139 : audit [DBG] from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: cluster 2026-03-09T20:26:23.459705+0000 mgr.a (mgr.14406) 140 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: cluster 2026-03-09T20:26:23.459705+0000 mgr.a (mgr.14406) 140 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:23.486095+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:23.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:23 vm04 bash[22793]: audit 2026-03-09T20:26:23.486095+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": 
"json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:22.248459+0000 mgr.a (mgr.14406) 132 : audit [DBG] from='client.14439 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:22.248459+0000 mgr.a (mgr.14406) 132 : audit [DBG] from='client.14439 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:22.428085+0000 mgr.a (mgr.14406) 133 : audit [DBG] from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:22.428085+0000 mgr.a (mgr.14406) 133 : audit [DBG] from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:22.603220+0000 mgr.a (mgr.14406) 134 : audit [DBG] from='client.14451 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:22.603220+0000 mgr.a (mgr.14406) 134 : audit [DBG] from='client.14451 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:22.779401+0000 mgr.a (mgr.14406) 135 : audit [DBG] from='client.14457 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:22.779401+0000 mgr.a (mgr.14406) 135 : audit [DBG] from='client.14457 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:22.948957+0000 mgr.a (mgr.14406) 136 : audit [DBG] from='client.14463 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:22.948957+0000 mgr.a (mgr.14406) 136 : audit [DBG] from='client.14463 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:23.115369+0000 mgr.a (mgr.14406) 137 : audit [DBG] from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:23.115369+0000 mgr.a (mgr.14406) 137 : audit [DBG] from='client.14469 -' entity='client.admin' cmd=[{"prefix": 
"orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:23.280883+0000 mgr.a (mgr.14406) 138 : audit [DBG] from='client.24307 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:23.280883+0000 mgr.a (mgr.14406) 138 : audit [DBG] from='client.24307 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:23.445858+0000 mgr.a (mgr.14406) 139 : audit [DBG] from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:23.445858+0000 mgr.a (mgr.14406) 139 : audit [DBG] from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: cluster 2026-03-09T20:26:23.459705+0000 mgr.a (mgr.14406) 140 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: cluster 2026-03-09T20:26:23.459705+0000 mgr.a (mgr.14406) 140 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:23.486095+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:23 vm03 bash[20708]: audit 2026-03-09T20:26:23.486095+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:23.967 INFO:teuthology.orchestra.run.vm03.stderr:+ ALL_HOST_IPS='192.168.123.103 2026-03-09T20:26:23.967 INFO:teuthology.orchestra.run.vm03.stderr:192.168.123.104 2026-03-09T20:26:23.967 INFO:teuthology.orchestra.run.vm03.stderr:192.168.123.108' 2026-03-09T20:26:23.967 INFO:teuthology.orchestra.run.vm03.stderr:+ for ip in $ALL_HOST_IPS 2026-03-09T20:26:23.967 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -s http://192.168.123.103:9100/metric 2026-03-09T20:26:23.971 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.971 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.971 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.971 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.971 INFO:teuthology.orchestra.run.vm03.stdout: Node Exporter 2026-03-09T20:26:23.971 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.972 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.972 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.972 INFO:teuthology.orchestra.run.vm03.stdout:
[node-exporter landing page HTML from 192.168.123.103 elided: title "Node Exporter", heading "Prometheus Node Exporter", Version: (version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b), "Metrics" link; logged on teuthology.orchestra.run.vm03.stdout at 2026-03-09T20:26:23.972]
2026-03-09T20:26:23.972 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.972 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.972 INFO:teuthology.orchestra.run.vm03.stderr:+ for ip in $ALL_HOST_IPS 2026-03-09T20:26:23.972 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -s http://192.168.123.104:9100/metric 2026-03-09T20:26:23.974 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stdout: Node Exporter 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stdout:
[node-exporter landing page HTML from 192.168.123.104 elided: title "Node Exporter", heading "Prometheus Node Exporter", Version: (version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b), "Metrics" link; logged on teuthology.orchestra.run.vm03.stdout at 2026-03-09T20:26:23.975]
2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stderr:+ for ip in $ALL_HOST_IPS 2026-03-09T20:26:23.975 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -s http://192.168.123.108:9100/metric 2026-03-09T20:26:23.977 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stdout: Node Exporter 2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stdout:
[node-exporter landing page HTML from 192.168.123.108 elided: title "Node Exporter", heading "Prometheus Node Exporter", Version: (version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b), "Metrics" link; logged on teuthology.orchestra.run.vm03.stdout at 2026-03-09T20:26:23.978]
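The three probes above request http://<addr>:9100/metric on every host and get node-exporter's HTML landing page back (elided above). A hypothetical standalone variant of the same per-host check, assuming node_exporter's default telemetry path /metrics on port 9100 and reporting hosts that do not answer (a sketch, not the workunit's own commands):

  ALL_HOST_IPS="192.168.123.103 192.168.123.104 192.168.123.108"
  for ip in $ALL_HOST_IPS; do
      # -f makes curl exit non-zero on HTTP errors; --max-time bounds a hung endpoint
      if curl -fsS --max-time 5 "http://${ip}:9100/metrics" | grep -q '^node_'; then
          echo "node-exporter on ${ip}: serving metrics"
      else
          echo "node-exporter on ${ip}: no metrics" >&2
      fi
  done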
2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stdout: 2026-03-09T20:26:23.978 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -k -s https://192.168.123.103:3000/api/health 2026-03-09T20:26:23.987 INFO:teuthology.orchestra.run.vm03.stdout:{ 2026-03-09T20:26:23.987 INFO:teuthology.orchestra.run.vm03.stdout: "commit": "03f502a94d17f7dc4e6c34acdf8428aedd986e4c", 2026-03-09T20:26:23.987 INFO:teuthology.orchestra.run.vm03.stdout: "database": "ok", 2026-03-09T20:26:23.987 INFO:teuthology.orchestra.run.vm03.stdout: "version": "10.4.0" 2026-03-09T20:26:23.988 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -k -s https://192.168.123.103:3000/api/health 2026-03-09T20:26:23.988 INFO:teuthology.orchestra.run.vm03.stderr:+ jq -e '.database == "ok"' 2026-03-09T20:26:23.998 INFO:teuthology.orchestra.run.vm03.stdout:}true 2026-03-09T20:26:23.998 INFO:teuthology.orchestra.run.vm03.stderr:+ ceph orch daemon stop mon.c 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:22.248459+0000 mgr.a (mgr.14406) 132 : audit [DBG] from='client.14439 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:22.248459+0000 mgr.a (mgr.14406) 132 : audit [DBG] from='client.14439 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:22.428085+0000 mgr.a (mgr.14406) 133 : audit [DBG] from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:22.428085+0000 mgr.a (mgr.14406) 133 : audit [DBG] from='client.14445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:22.603220+0000 mgr.a (mgr.14406) 134 : audit [DBG] from='client.14451 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:22.603220+0000 mgr.a (mgr.14406) 134 : audit [DBG] from='client.14451 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:22.779401+0000 mgr.a (mgr.14406) 135 : audit [DBG] from='client.14457 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:22.779401+0000 mgr.a (mgr.14406) 135 : audit [DBG] from='client.14457 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "mon", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:22.948957+0000 mgr.a (mgr.14406) 136 : audit [DBG] from='client.14463 -' entity='client.admin' 
cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:22.948957+0000 mgr.a (mgr.14406) 136 : audit [DBG] from='client.14463 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "grafana", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:23.115369+0000 mgr.a (mgr.14406) 137 : audit [DBG] from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:23.115369+0000 mgr.a (mgr.14406) 137 : audit [DBG] from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "prometheus", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:23.280883+0000 mgr.a (mgr.14406) 138 : audit [DBG] from='client.24307 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:23.280883+0000 mgr.a (mgr.14406) 138 : audit [DBG] from='client.24307 -' entity='client.admin' cmd=[{"prefix": "orch ps", "daemon_type": "alertmanager", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:23.445858+0000 mgr.a (mgr.14406) 139 : audit [DBG] from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:23.445858+0000 mgr.a (mgr.14406) 139 : audit [DBG] from='client.14481 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: cluster 2026-03-09T20:26:23.459705+0000 mgr.a (mgr.14406) 140 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: cluster 2026-03-09T20:26:23.459705+0000 mgr.a (mgr.14406) 140 : cluster [DBG] pgmap v108: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:23.486095+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:24.059 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:23 vm08 bash[23232]: audit 2026-03-09T20:26:23.486095+0000 mon.b (mon.2) 61 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:24.170 INFO:teuthology.orchestra.run.vm03.stdout:Scheduled to stop mon.c on host 
'vm08' 2026-03-09T20:26:24.190 INFO:teuthology.orchestra.run.vm03.stderr:+ sleep 120 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:23.626587+0000 mgr.a (mgr.14406) 141 : audit [DBG] from='client.14487 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:23.626587+0000 mgr.a (mgr.14406) 141 : audit [DBG] from='client.14487 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:23.784534+0000 mgr.a (mgr.14406) 142 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:23.784534+0000 mgr.a (mgr.14406) 142 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:23.957154+0000 mgr.a (mgr.14406) 143 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:23.957154+0000 mgr.a (mgr.14406) 143 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.157674+0000 mgr.a (mgr.14406) 144 : audit [DBG] from='client.24323 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.157674+0000 mgr.a (mgr.14406) 144 : audit [DBG] from='client.24323 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: cephadm 2026-03-09T20:26:24.158034+0000 mgr.a (mgr.14406) 145 : cephadm [INF] Schedule stop daemon mon.c 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: cephadm 2026-03-09T20:26:24.158034+0000 mgr.a (mgr.14406) 145 : cephadm [INF] Schedule stop daemon mon.c 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.163474+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.163474+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.168849+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14406 ' entity='mgr.a' 
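The Grafana probe traced at 20:26:23.98 above pipes /api/health through `jq -e`, whose exit status mirrors the last value it emits, so a false comparison turns the probe into a failing assertion. A minimal reproduction against the health document the log shows (hypothetical standalone sketch, not the workunit itself):

  # Health document as returned above, abridged to the fields visible in the log
  health='{"commit":"03f502a94d17f7dc4e6c34acdf8428aedd986e4c","database":"ok","version":"10.4.0"}'
  echo "$health" | jq -e '.database == "ok"';  echo "exit=$?"   # true, exit=0
  echo "$health" | jq -e '.database == "bad"'; echo "exit=$?"   # false, exit=1
  # Against the live endpoint (self-signed TLS, hence -k):
  #   curl -k -s https://192.168.123.103:3000/api/health | jq -e '.database == "ok"'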
2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.168849+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.172124+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.172124+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.173329+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.173329+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.173831+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.173831+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.175711+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:25 vm08 bash[23232]: audit 2026-03-09T20:26:24.175711+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:23.626587+0000 mgr.a (mgr.14406) 141 : audit [DBG] from='client.14487 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:23.626587+0000 mgr.a (mgr.14406) 141 : audit [DBG] from='client.14487 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:23.784534+0000 mgr.a (mgr.14406) 142 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:23.784534+0000 mgr.a (mgr.14406) 142 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 
2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:23.957154+0000 mgr.a (mgr.14406) 143 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:23.957154+0000 mgr.a (mgr.14406) 143 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.157674+0000 mgr.a (mgr.14406) 144 : audit [DBG] from='client.24323 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.157674+0000 mgr.a (mgr.14406) 144 : audit [DBG] from='client.24323 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: cephadm 2026-03-09T20:26:24.158034+0000 mgr.a (mgr.14406) 145 : cephadm [INF] Schedule stop daemon mon.c 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: cephadm 2026-03-09T20:26:24.158034+0000 mgr.a (mgr.14406) 145 : cephadm [INF] Schedule stop daemon mon.c 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.163474+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.163474+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.168849+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.168849+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.172124+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.172124+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.173329+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.173329+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": 
"config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.173831+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.173831+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.175711+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.616 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:25 vm04 bash[22793]: audit 2026-03-09T20:26:24.175711+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:23.626587+0000 mgr.a (mgr.14406) 141 : audit [DBG] from='client.14487 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:23.626587+0000 mgr.a (mgr.14406) 141 : audit [DBG] from='client.14487 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:23.784534+0000 mgr.a (mgr.14406) 142 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:23.784534+0000 mgr.a (mgr.14406) 142 : audit [DBG] from='client.14493 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:23.957154+0000 mgr.a (mgr.14406) 143 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:23.957154+0000 mgr.a (mgr.14406) 143 : audit [DBG] from='client.24313 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.157674+0000 mgr.a (mgr.14406) 144 : audit [DBG] from='client.24323 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.157674+0000 mgr.a (mgr.14406) 144 : audit [DBG] from='client.24323 -' entity='client.admin' cmd=[{"prefix": "orch daemon", "action": "stop", "name": "mon.c", "target": ["mon-mgr", ""]}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
20:26:25 vm03 bash[20708]: cephadm 2026-03-09T20:26:24.158034+0000 mgr.a (mgr.14406) 145 : cephadm [INF] Schedule stop daemon mon.c 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: cephadm 2026-03-09T20:26:24.158034+0000 mgr.a (mgr.14406) 145 : cephadm [INF] Schedule stop daemon mon.c 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.163474+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.163474+0000 mon.a (mon.0) 538 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.168849+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.168849+0000 mon.a (mon.0) 539 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.172124+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.172124+0000 mon.b (mon.2) 62 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.173329+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.173329+0000 mon.b (mon.2) 63 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.173831+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.173831+0000 mon.b (mon.2) 64 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.175711+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:25.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20708]: audit 2026-03-09T20:26:24.175711+0000 mon.a (mon.0) 540 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:26.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:26:25 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:26:25] "GET /metrics HTTP/1.1" 200 21333 "" "Prometheus/2.51.0" 2026-03-09T20:26:26.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:26 vm08 
bash[23232]: cluster 2026-03-09T20:26:25.459944+0000 mgr.a (mgr.14406) 146 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:26.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:26 vm08 bash[23232]: cluster 2026-03-09T20:26:25.459944+0000 mgr.a (mgr.14406) 146 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:26.615 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:26 vm04 bash[22793]: cluster 2026-03-09T20:26:25.459944+0000 mgr.a (mgr.14406) 146 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:26.615 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:26 vm04 bash[22793]: cluster 2026-03-09T20:26:25.459944+0000 mgr.a (mgr.14406) 146 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:26 vm03 bash[20708]: cluster 2026-03-09T20:26:25.459944+0000 mgr.a (mgr.14406) 146 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:26.657 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:26 vm03 bash[20708]: cluster 2026-03-09T20:26:25.459944+0000 mgr.a (mgr.14406) 146 : cluster [DBG] pgmap v109: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:28.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:28 vm08 bash[23232]: cluster 2026-03-09T20:26:27.460183+0000 mgr.a (mgr.14406) 147 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:28.809 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:28 vm08 bash[23232]: cluster 2026-03-09T20:26:27.460183+0000 mgr.a (mgr.14406) 147 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:28.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:28 vm04 bash[22793]: cluster 2026-03-09T20:26:27.460183+0000 mgr.a (mgr.14406) 147 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:28.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:28 vm04 bash[22793]: cluster 2026-03-09T20:26:27.460183+0000 mgr.a (mgr.14406) 147 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:28 vm03 bash[20708]: cluster 2026-03-09T20:26:27.460183+0000 mgr.a (mgr.14406) 147 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:28 vm03 bash[20708]: cluster 2026-03-09T20:26:27.460183+0000 mgr.a (mgr.14406) 147 : cluster [DBG] pgmap v110: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:29.280 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:28 vm08 systemd[1]: Stopping Ceph mon.c for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d... 
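During the 120-second sleep, the effect of stopping mon.c is visible both in the journal below (mon.c receives SIGTERM, mon.a and mon.b re-elect, and MON_DOWN is raised) and from the CLI. A hedged sketch of confirming it by hand; the JSON field names (status_desc, .checks) are assumptions about the Squid CLI output, not values copied from this job:

  # Daemon view: cephadm should report mon.c as stopped on vm08
  ceph orch ps --daemon-type mon -f json | jq -r '.[] | "\(.daemon_name) \(.status_desc)"'
  # Cluster view: the active health checks should now include MON_DOWN (quorum a,b only)
  ceph health detail -f json | jq -r '.checks | keys[]'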
2026-03-09T20:26:29.280 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:29 vm08 bash[23232]: debug 2026-03-09T20:26:29.017+0000 7f8c21929640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.c -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T20:26:29.280 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:29 vm08 bash[23232]: debug 2026-03-09T20:26:29.017+0000 7f8c21929640 -1 mon.c@1(peon) e3 *** Got Signal Terminated *** 2026-03-09T20:26:29.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:29 vm08 bash[31211]: ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d-mon-c 2026-03-09T20:26:29.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:29 vm08 systemd[1]: ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.c.service: Deactivated successfully. 2026-03-09T20:26:29.559 INFO:journalctl@ceph.mon.c.vm08.stdout:Mar 09 20:26:29 vm08 systemd[1]: Stopped Ceph mon.c for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d. 2026-03-09T20:26:36.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:26:35 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:26:35] "GET /metrics HTTP/1.1" 200 21333 "" "Prometheus/2.51.0" 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:29.460395+0000 mgr.a (mgr.14406) 148 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:29.460395+0000 mgr.a (mgr.14406) 148 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:31.460617+0000 mgr.a (mgr.14406) 149 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:31.460617+0000 mgr.a (mgr.14406) 149 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:33.460869+0000 mgr.a (mgr.14406) 150 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:33.460869+0000 mgr.a (mgr.14406) 150 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:35.461118+0000 mgr.a (mgr.14406) 151 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:35.461118+0000 mgr.a (mgr.14406) 151 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:37.461349+0000 mgr.a 
(mgr.14406) 152 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:37.461349+0000 mgr.a (mgr.14406) 152 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: audit 2026-03-09T20:26:38.486264+0000 mon.b (mon.2) 65 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: audit 2026-03-09T20:26:38.486264+0000 mon.b (mon.2) 65 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:38.525418+0000 mon.b (mon.2) 66 : cluster [INF] mon.b calling monitor election 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:38.525418+0000 mon.b (mon.2) 66 : cluster [INF] mon.b calling monitor election 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:38.526125+0000 mon.a (mon.0) 541 : cluster [INF] mon.a calling monitor election 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:38.526125+0000 mon.a (mon.0) 541 : cluster [INF] mon.a calling monitor election 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:39.461581+0000 mgr.a (mgr.14406) 153 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:39.461581+0000 mgr.a (mgr.14406) 153 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:41.461860+0000 mgr.a (mgr.14406) 154 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:41.461860+0000 mgr.a (mgr.14406) 154 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.462134+0000 mgr.a (mgr.14406) 155 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.462134+0000 mgr.a (mgr.14406) 155 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.528767+0000 mon.a (mon.0) 542 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,2) 
2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.528767+0000 mon.a (mon.0) 542 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,2) 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535184+0000 mon.a (mon.0) 543 : cluster [DBG] monmap epoch 3 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535184+0000 mon.a (mon.0) 543 : cluster [DBG] monmap epoch 3 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535196+0000 mon.a (mon.0) 544 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535196+0000 mon.a (mon.0) 544 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535201+0000 mon.a (mon.0) 545 : cluster [DBG] last_changed 2026-03-09T20:19:39.236940+0000 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535201+0000 mon.a (mon.0) 545 : cluster [DBG] last_changed 2026-03-09T20:19:39.236940+0000 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535207+0000 mon.a (mon.0) 546 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535207+0000 mon.a (mon.0) 546 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535212+0000 mon.a (mon.0) 547 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535212+0000 mon.a (mon.0) 547 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535216+0000 mon.a (mon.0) 548 : cluster [DBG] election_strategy: 1 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535216+0000 mon.a (mon.0) 548 : cluster [DBG] election_strategy: 1 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535356+0000 mon.a (mon.0) 549 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535356+0000 mon.a (mon.0) 549 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535361+0000 mon.a (mon.0) 550 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535361+0000 mon.a (mon.0) 550 : cluster [DBG] 1: 
[v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535366+0000 mon.a (mon.0) 551 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535366+0000 mon.a (mon.0) 551 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535772+0000 mon.a (mon.0) 552 : cluster [DBG] fsmap 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535772+0000 mon.a (mon.0) 552 : cluster [DBG] fsmap 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535790+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.535790+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.536791+0000 mon.a (mon.0) 554 : cluster [DBG] mgrmap e20: a(active, since 3m), standbys: b 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.536791+0000 mon.a (mon.0) 554 : cluster [DBG] mgrmap e20: a(active, since 3m), standbys: b 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.536902+0000 mon.a (mon.0) 555 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.536902+0000 mon.a (mon.0) 555 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: audit 2026-03-09T20:26:43.560647+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14406 ' entity='' 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: audit 2026-03-09T20:26:43.560647+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14406 ' entity='' 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.560779+0000 mon.a (mon.0) 557 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.560779+0000 mon.a (mon.0) 557 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.560786+0000 mon.a (mon.0) 558 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.560786+0000 mon.a (mon.0) 558 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 
2026-03-09T20:26:43.560791+0000 mon.a (mon.0) 559 : cluster [WRN] mon.c (rank 1) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: cluster 2026-03-09T20:26:43.560791+0000 mon.a (mon.0) 559 : cluster [WRN] mon.c (rank 1) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: audit 2026-03-09T20:26:43.612468+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: audit 2026-03-09T20:26:43.612468+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: audit 2026-03-09T20:26:43.644021+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:44.867 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:44 vm04 bash[22793]: audit 2026-03-09T20:26:43.644021+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:29.460395+0000 mgr.a (mgr.14406) 148 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:29.460395+0000 mgr.a (mgr.14406) 148 : cluster [DBG] pgmap v111: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:31.460617+0000 mgr.a (mgr.14406) 149 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:31.460617+0000 mgr.a (mgr.14406) 149 : cluster [DBG] pgmap v112: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:33.460869+0000 mgr.a (mgr.14406) 150 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:33.460869+0000 mgr.a (mgr.14406) 150 : cluster [DBG] pgmap v113: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:35.461118+0000 mgr.a (mgr.14406) 151 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:35.461118+0000 mgr.a (mgr.14406) 151 : cluster [DBG] pgmap v114: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 
20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:37.461349+0000 mgr.a (mgr.14406) 152 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:37.461349+0000 mgr.a (mgr.14406) 152 : cluster [DBG] pgmap v115: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: audit 2026-03-09T20:26:38.486264+0000 mon.b (mon.2) 65 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: audit 2026-03-09T20:26:38.486264+0000 mon.b (mon.2) 65 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:38.525418+0000 mon.b (mon.2) 66 : cluster [INF] mon.b calling monitor election 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:38.525418+0000 mon.b (mon.2) 66 : cluster [INF] mon.b calling monitor election 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:38.526125+0000 mon.a (mon.0) 541 : cluster [INF] mon.a calling monitor election 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:38.526125+0000 mon.a (mon.0) 541 : cluster [INF] mon.a calling monitor election 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:39.461581+0000 mgr.a (mgr.14406) 153 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:39.461581+0000 mgr.a (mgr.14406) 153 : cluster [DBG] pgmap v116: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:41.461860+0000 mgr.a (mgr.14406) 154 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:41.461860+0000 mgr.a (mgr.14406) 154 : cluster [DBG] pgmap v117: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.462134+0000 mgr.a (mgr.14406) 155 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.462134+0000 mgr.a (mgr.14406) 155 : cluster [DBG] pgmap v118: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.528767+0000 mon.a (mon.0) 542 : 
cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,2) 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.528767+0000 mon.a (mon.0) 542 : cluster [INF] mon.a is new leader, mons a,b in quorum (ranks 0,2) 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535184+0000 mon.a (mon.0) 543 : cluster [DBG] monmap epoch 3 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535184+0000 mon.a (mon.0) 543 : cluster [DBG] monmap epoch 3 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535196+0000 mon.a (mon.0) 544 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535196+0000 mon.a (mon.0) 544 : cluster [DBG] fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535201+0000 mon.a (mon.0) 545 : cluster [DBG] last_changed 2026-03-09T20:19:39.236940+0000 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535201+0000 mon.a (mon.0) 545 : cluster [DBG] last_changed 2026-03-09T20:19:39.236940+0000 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535207+0000 mon.a (mon.0) 546 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535207+0000 mon.a (mon.0) 546 : cluster [DBG] created 2026-03-09T20:18:30.276494+0000 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535212+0000 mon.a (mon.0) 547 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535212+0000 mon.a (mon.0) 547 : cluster [DBG] min_mon_release 19 (squid) 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535216+0000 mon.a (mon.0) 548 : cluster [DBG] election_strategy: 1 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535216+0000 mon.a (mon.0) 548 : cluster [DBG] election_strategy: 1 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535356+0000 mon.a (mon.0) 549 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535356+0000 mon.a (mon.0) 549 : cluster [DBG] 0: [v2:192.168.123.103:3300/0,v1:192.168.123.103:6789/0] mon.a 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535361+0000 mon.a (mon.0) 550 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 
2026-03-09T20:26:43.535361+0000 mon.a (mon.0) 550 : cluster [DBG] 1: [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] mon.c 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535366+0000 mon.a (mon.0) 551 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535366+0000 mon.a (mon.0) 551 : cluster [DBG] 2: [v2:192.168.123.104:3300/0,v1:192.168.123.104:6789/0] mon.b 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535772+0000 mon.a (mon.0) 552 : cluster [DBG] fsmap 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535772+0000 mon.a (mon.0) 552 : cluster [DBG] fsmap 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535790+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.535790+0000 mon.a (mon.0) 553 : cluster [DBG] osdmap e22: 3 total, 3 up, 3 in 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.536791+0000 mon.a (mon.0) 554 : cluster [DBG] mgrmap e20: a(active, since 3m), standbys: b 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.536791+0000 mon.a (mon.0) 554 : cluster [DBG] mgrmap e20: a(active, since 3m), standbys: b 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.536902+0000 mon.a (mon.0) 555 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.536902+0000 mon.a (mon.0) 555 : cluster [WRN] Health check failed: 1/3 mons down, quorum a,b (MON_DOWN) 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: audit 2026-03-09T20:26:43.560647+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14406 ' entity='' 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: audit 2026-03-09T20:26:43.560647+0000 mon.a (mon.0) 556 : audit [INF] from='mgr.14406 ' entity='' 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.560779+0000 mon.a (mon.0) 557 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.560779+0000 mon.a (mon.0) 557 : cluster [WRN] Health detail: HEALTH_WARN 1/3 mons down, quorum a,b 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.560786+0000 mon.a (mon.0) 558 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.560786+0000 mon.a (mon.0) 558 : cluster [WRN] [WRN] MON_DOWN: 1/3 mons down, quorum a,b 2026-03-09T20:26:44.908 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.560791+0000 mon.a (mon.0) 559 : cluster [WRN] mon.c (rank 1) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: cluster 2026-03-09T20:26:43.560791+0000 mon.a (mon.0) 559 : cluster [WRN] mon.c (rank 1) addr [v2:192.168.123.108:3300/0,v1:192.168.123.108:6789/0] is down (out of quorum) 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: audit 2026-03-09T20:26:43.612468+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: audit 2026-03-09T20:26:43.612468+0000 mon.a (mon.0) 560 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: audit 2026-03-09T20:26:43.644021+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:44.908 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:44 vm03 bash[20708]: audit 2026-03-09T20:26:43.644021+0000 mon.b (mon.2) 67 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:26:46.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:26:45 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:26:45] "GET /metrics HTTP/1.1" 200 21331 "" "Prometheus/2.51.0" 2026-03-09T20:26:46.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:46 vm04 bash[22793]: cluster 2026-03-09T20:26:45.462365+0000 mgr.a (mgr.14406) 156 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:46.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:46 vm04 bash[22793]: cluster 2026-03-09T20:26:45.462365+0000 mgr.a (mgr.14406) 156 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:46.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:46 vm03 bash[20708]: cluster 2026-03-09T20:26:45.462365+0000 mgr.a (mgr.14406) 156 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:46.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:46 vm03 bash[20708]: cluster 2026-03-09T20:26:45.462365+0000 mgr.a (mgr.14406) 156 : cluster [DBG] pgmap v119: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:48.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:48 vm04 bash[22793]: cluster 2026-03-09T20:26:47.462628+0000 mgr.a (mgr.14406) 157 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:48.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:48 vm04 bash[22793]: cluster 2026-03-09T20:26:47.462628+0000 mgr.a (mgr.14406) 157 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:48.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:48 vm03 bash[20708]: cluster 2026-03-09T20:26:47.462628+0000 mgr.a (mgr.14406) 157 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:48.907 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:48 vm03 bash[20708]: cluster 2026-03-09T20:26:47.462628+0000 mgr.a (mgr.14406) 157 : cluster [DBG] pgmap v120: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:50.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: audit 2026-03-09T20:26:48.693103+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: audit 2026-03-09T20:26:48.693103+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: audit 2026-03-09T20:26:48.698333+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: audit 2026-03-09T20:26:48.698333+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: audit 2026-03-09T20:26:48.702549+0000 mon.b (mon.2) 68 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: audit 2026-03-09T20:26:48.702549+0000 mon.b (mon.2) 68 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: audit 2026-03-09T20:26:48.703534+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: audit 2026-03-09T20:26:48.703534+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: audit 2026-03-09T20:26:48.705792+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: audit 2026-03-09T20:26:48.705792+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: cluster 2026-03-09T20:26:49.462860+0000 mgr.a (mgr.14406) 158 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:50.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:49 vm04 bash[22793]: cluster 2026-03-09T20:26:49.462860+0000 mgr.a (mgr.14406) 158 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: audit 2026-03-09T20:26:48.693103+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: audit 2026-03-09T20:26:48.693103+0000 mon.a (mon.0) 561 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.157 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: audit 2026-03-09T20:26:48.698333+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: audit 2026-03-09T20:26:48.698333+0000 mon.a (mon.0) 562 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: audit 2026-03-09T20:26:48.702549+0000 mon.b (mon.2) 68 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: audit 2026-03-09T20:26:48.702549+0000 mon.b (mon.2) 68 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: audit 2026-03-09T20:26:48.703534+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: audit 2026-03-09T20:26:48.703534+0000 mon.b (mon.2) 69 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: audit 2026-03-09T20:26:48.705792+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: audit 2026-03-09T20:26:48.705792+0000 mon.a (mon.0) 563 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: cluster 2026-03-09T20:26:49.462860+0000 mgr.a (mgr.14406) 158 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:49 vm03 bash[20708]: cluster 2026-03-09T20:26:49.462860+0000 mgr.a (mgr.14406) 158 : cluster [DBG] pgmap v121: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:52.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:52 vm04 bash[22793]: cluster 2026-03-09T20:26:51.463073+0000 mgr.a (mgr.14406) 159 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:52.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:52 vm04 bash[22793]: cluster 2026-03-09T20:26:51.463073+0000 mgr.a (mgr.14406) 159 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:52.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:52 vm03 bash[20708]: cluster 2026-03-09T20:26:51.463073+0000 mgr.a (mgr.14406) 159 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:52.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:52 vm03 bash[20708]: cluster 2026-03-09T20:26:51.463073+0000 mgr.a (mgr.14406) 159 : cluster [DBG] pgmap v122: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 
09 20:26:54 vm04 bash[22793]: cluster 2026-03-09T20:26:53.463258+0000 mgr.a (mgr.14406) 160 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:54 vm04 bash[22793]: cluster 2026-03-09T20:26:53.463258+0000 mgr.a (mgr.14406) 160 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:54 vm04 bash[22793]: audit 2026-03-09T20:26:53.488480+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:54 vm04 bash[22793]: audit 2026-03-09T20:26:53.488480+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:54 vm04 bash[22793]: audit 2026-03-09T20:26:53.492327+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:54.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:54 vm04 bash[22793]: audit 2026-03-09T20:26:53.492327+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:54 vm03 bash[20708]: cluster 2026-03-09T20:26:53.463258+0000 mgr.a (mgr.14406) 160 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:54 vm03 bash[20708]: cluster 2026-03-09T20:26:53.463258+0000 mgr.a (mgr.14406) 160 : cluster [DBG] pgmap v123: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:54 vm03 bash[20708]: audit 2026-03-09T20:26:53.488480+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:54 vm03 bash[20708]: audit 2026-03-09T20:26:53.488480+0000 mon.a (mon.0) 564 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:26:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:54 vm03 bash[20708]: audit 2026-03-09T20:26:53.492327+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:54 vm03 bash[20708]: audit 2026-03-09T20:26:53.492327+0000 mon.b (mon.2) 70 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:26:56.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:26:55 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:26:55] "GET /metrics HTTP/1.1" 200 21394 "" "Prometheus/2.51.0" 2026-03-09T20:26:56.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:56 vm04 bash[22793]: cluster 2026-03-09T20:26:55.463495+0000 mgr.a (mgr.14406) 161 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:56.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:56 vm04 bash[22793]: cluster 2026-03-09T20:26:55.463495+0000 mgr.a (mgr.14406) 
161 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:56 vm03 bash[20708]: cluster 2026-03-09T20:26:55.463495+0000 mgr.a (mgr.14406) 161 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:56 vm03 bash[20708]: cluster 2026-03-09T20:26:55.463495+0000 mgr.a (mgr.14406) 161 : cluster [DBG] pgmap v124: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:58.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:58 vm04 bash[22793]: cluster 2026-03-09T20:26:57.463717+0000 mgr.a (mgr.14406) 162 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:58.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:26:58 vm04 bash[22793]: cluster 2026-03-09T20:26:57.463717+0000 mgr.a (mgr.14406) 162 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:58 vm03 bash[20708]: cluster 2026-03-09T20:26:57.463717+0000 mgr.a (mgr.14406) 162 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:26:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:26:58 vm03 bash[20708]: cluster 2026-03-09T20:26:57.463717+0000 mgr.a (mgr.14406) 162 : cluster [DBG] pgmap v125: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:00.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:00 vm04 bash[22793]: cluster 2026-03-09T20:26:59.463933+0000 mgr.a (mgr.14406) 163 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:00.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:00 vm04 bash[22793]: cluster 2026-03-09T20:26:59.463933+0000 mgr.a (mgr.14406) 163 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:00 vm03 bash[20708]: cluster 2026-03-09T20:26:59.463933+0000 mgr.a (mgr.14406) 163 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:00 vm03 bash[20708]: cluster 2026-03-09T20:26:59.463933+0000 mgr.a (mgr.14406) 163 : cluster [DBG] pgmap v126: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:02.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:02 vm04 bash[22793]: cluster 2026-03-09T20:27:01.464160+0000 mgr.a (mgr.14406) 164 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:02.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:02 vm04 bash[22793]: cluster 2026-03-09T20:27:01.464160+0000 mgr.a (mgr.14406) 164 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:02.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:02 vm03 bash[20708]: cluster 2026-03-09T20:27:01.464160+0000 mgr.a (mgr.14406) 164 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:02.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:02 vm03 
bash[20708]: cluster 2026-03-09T20:27:01.464160+0000 mgr.a (mgr.14406) 164 : cluster [DBG] pgmap v127: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:03.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:03 vm04 bash[22793]: cluster 2026-03-09T20:27:03.464404+0000 mgr.a (mgr.14406) 165 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:03.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:03 vm04 bash[22793]: cluster 2026-03-09T20:27:03.464404+0000 mgr.a (mgr.14406) 165 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:03.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:03 vm03 bash[20708]: cluster 2026-03-09T20:27:03.464404+0000 mgr.a (mgr.14406) 165 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:03.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:03 vm03 bash[20708]: cluster 2026-03-09T20:27:03.464404+0000 mgr.a (mgr.14406) 165 : cluster [DBG] pgmap v128: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:06.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:27:05 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:27:05] "GET /metrics HTTP/1.1" 200 21394 "" "Prometheus/2.51.0" 2026-03-09T20:27:06.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:06 vm04 bash[22793]: cluster 2026-03-09T20:27:05.464631+0000 mgr.a (mgr.14406) 166 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:06.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:06 vm04 bash[22793]: cluster 2026-03-09T20:27:05.464631+0000 mgr.a (mgr.14406) 166 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:06 vm03 bash[20708]: cluster 2026-03-09T20:27:05.464631+0000 mgr.a (mgr.14406) 166 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:06 vm03 bash[20708]: cluster 2026-03-09T20:27:05.464631+0000 mgr.a (mgr.14406) 166 : cluster [DBG] pgmap v129: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:08.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:08 vm04 bash[22793]: cluster 2026-03-09T20:27:07.464893+0000 mgr.a (mgr.14406) 167 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:08 vm04 bash[22793]: cluster 2026-03-09T20:27:07.464893+0000 mgr.a (mgr.14406) 167 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:08 vm04 bash[22793]: audit 2026-03-09T20:27:08.487345+0000 mon.b (mon.2) 71 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:08.866 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:08 vm04 bash[22793]: audit 2026-03-09T20:27:08.487345+0000 mon.b (mon.2) 71 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 
2026-03-09T20:27:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:08 vm03 bash[20708]: cluster 2026-03-09T20:27:07.464893+0000 mgr.a (mgr.14406) 167 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:08 vm03 bash[20708]: cluster 2026-03-09T20:27:07.464893+0000 mgr.a (mgr.14406) 167 : cluster [DBG] pgmap v130: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:08 vm03 bash[20708]: audit 2026-03-09T20:27:08.487345+0000 mon.b (mon.2) 71 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:08.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:08 vm03 bash[20708]: audit 2026-03-09T20:27:08.487345+0000 mon.b (mon.2) 71 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:10.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:10 vm04 bash[22793]: cluster 2026-03-09T20:27:09.465137+0000 mgr.a (mgr.14406) 168 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:10.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:10 vm04 bash[22793]: cluster 2026-03-09T20:27:09.465137+0000 mgr.a (mgr.14406) 168 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:10.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:10 vm03 bash[20708]: cluster 2026-03-09T20:27:09.465137+0000 mgr.a (mgr.14406) 168 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:10.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:10 vm03 bash[20708]: cluster 2026-03-09T20:27:09.465137+0000 mgr.a (mgr.14406) 168 : cluster [DBG] pgmap v131: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:12.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:11 vm04 bash[22793]: cluster 2026-03-09T20:27:11.465376+0000 mgr.a (mgr.14406) 169 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:12.156 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:11 vm04 bash[22793]: cluster 2026-03-09T20:27:11.465376+0000 mgr.a (mgr.14406) 169 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:12.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:11 vm03 bash[20708]: cluster 2026-03-09T20:27:11.465376+0000 mgr.a (mgr.14406) 169 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:12.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:11 vm03 bash[20708]: cluster 2026-03-09T20:27:11.465376+0000 mgr.a (mgr.14406) 169 : cluster [DBG] pgmap v132: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:15.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:14 vm04 bash[22793]: cluster 2026-03-09T20:27:13.465642+0000 mgr.a (mgr.14406) 170 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:15.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:14 vm04 bash[22793]: cluster 
2026-03-09T20:27:13.465642+0000 mgr.a (mgr.14406) 170 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:15.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:14 vm03 bash[20708]: cluster 2026-03-09T20:27:13.465642+0000 mgr.a (mgr.14406) 170 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:15.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:14 vm03 bash[20708]: cluster 2026-03-09T20:27:13.465642+0000 mgr.a (mgr.14406) 170 : cluster [DBG] pgmap v133: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:16.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:15 vm04 bash[22793]: cluster 2026-03-09T20:27:15.465919+0000 mgr.a (mgr.14406) 171 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:16.116 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:15 vm04 bash[22793]: cluster 2026-03-09T20:27:15.465919+0000 mgr.a (mgr.14406) 171 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:16.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:27:15 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:27:15] "GET /metrics HTTP/1.1" 200 21392 "" "Prometheus/2.51.0" 2026-03-09T20:27:16.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:15 vm03 bash[20708]: cluster 2026-03-09T20:27:15.465919+0000 mgr.a (mgr.14406) 171 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:16.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:15 vm03 bash[20708]: cluster 2026-03-09T20:27:15.465919+0000 mgr.a (mgr.14406) 171 : cluster [DBG] pgmap v134: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:18.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:18 vm04 bash[22793]: cluster 2026-03-09T20:27:17.466213+0000 mgr.a (mgr.14406) 172 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:18.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:18 vm04 bash[22793]: cluster 2026-03-09T20:27:17.466213+0000 mgr.a (mgr.14406) 172 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:18 vm03 bash[20708]: cluster 2026-03-09T20:27:17.466213+0000 mgr.a (mgr.14406) 172 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:18 vm03 bash[20708]: cluster 2026-03-09T20:27:17.466213+0000 mgr.a (mgr.14406) 172 : cluster [DBG] pgmap v135: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:20.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:20 vm04 bash[22793]: cluster 2026-03-09T20:27:19.466459+0000 mgr.a (mgr.14406) 173 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:20.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:20 vm04 bash[22793]: cluster 2026-03-09T20:27:19.466459+0000 mgr.a (mgr.14406) 173 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:20 vm03 
bash[20708]: cluster 2026-03-09T20:27:19.466459+0000 mgr.a (mgr.14406) 173 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:20 vm03 bash[20708]: cluster 2026-03-09T20:27:19.466459+0000 mgr.a (mgr.14406) 173 : cluster [DBG] pgmap v136: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:22.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:22 vm04 bash[22793]: cluster 2026-03-09T20:27:21.466722+0000 mgr.a (mgr.14406) 174 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:22.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:22 vm04 bash[22793]: cluster 2026-03-09T20:27:21.466722+0000 mgr.a (mgr.14406) 174 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:22.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:22 vm03 bash[20708]: cluster 2026-03-09T20:27:21.466722+0000 mgr.a (mgr.14406) 174 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:22.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:22 vm03 bash[20708]: cluster 2026-03-09T20:27:21.466722+0000 mgr.a (mgr.14406) 174 : cluster [DBG] pgmap v137: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:23.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:23 vm04 bash[22793]: audit 2026-03-09T20:27:23.487262+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:23.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:23 vm04 bash[22793]: audit 2026-03-09T20:27:23.487262+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:23 vm03 bash[20708]: audit 2026-03-09T20:27:23.487262+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:23 vm03 bash[20708]: audit 2026-03-09T20:27:23.487262+0000 mon.b (mon.2) 72 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:24.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:24 vm04 bash[22793]: cluster 2026-03-09T20:27:23.466985+0000 mgr.a (mgr.14406) 175 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:24.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:24 vm04 bash[22793]: cluster 2026-03-09T20:27:23.466985+0000 mgr.a (mgr.14406) 175 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:24.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:24 vm03 bash[20708]: cluster 2026-03-09T20:27:23.466985+0000 mgr.a (mgr.14406) 175 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:24.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:24 vm03 bash[20708]: cluster 2026-03-09T20:27:23.466985+0000 
mgr.a (mgr.14406) 175 : cluster [DBG] pgmap v138: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:26.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:27:25 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:27:25] "GET /metrics HTTP/1.1" 200 21397 "" "Prometheus/2.51.0" 2026-03-09T20:27:26.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:26 vm04 bash[22793]: cluster 2026-03-09T20:27:25.467235+0000 mgr.a (mgr.14406) 176 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:26.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:26 vm04 bash[22793]: cluster 2026-03-09T20:27:25.467235+0000 mgr.a (mgr.14406) 176 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:26.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:26 vm03 bash[20708]: cluster 2026-03-09T20:27:25.467235+0000 mgr.a (mgr.14406) 176 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:26.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:26 vm03 bash[20708]: cluster 2026-03-09T20:27:25.467235+0000 mgr.a (mgr.14406) 176 : cluster [DBG] pgmap v139: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:28.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:28 vm04 bash[22793]: cluster 2026-03-09T20:27:27.467503+0000 mgr.a (mgr.14406) 177 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:28.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:28 vm04 bash[22793]: cluster 2026-03-09T20:27:27.467503+0000 mgr.a (mgr.14406) 177 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:28 vm03 bash[20708]: cluster 2026-03-09T20:27:27.467503+0000 mgr.a (mgr.14406) 177 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:28 vm03 bash[20708]: cluster 2026-03-09T20:27:27.467503+0000 mgr.a (mgr.14406) 177 : cluster [DBG] pgmap v140: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:30.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:30 vm04 bash[22793]: cluster 2026-03-09T20:27:29.467779+0000 mgr.a (mgr.14406) 178 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:30.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:30 vm04 bash[22793]: cluster 2026-03-09T20:27:29.467779+0000 mgr.a (mgr.14406) 178 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:30.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:30 vm03 bash[20708]: cluster 2026-03-09T20:27:29.467779+0000 mgr.a (mgr.14406) 178 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:30.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:30 vm03 bash[20708]: cluster 2026-03-09T20:27:29.467779+0000 mgr.a (mgr.14406) 178 : cluster [DBG] pgmap v141: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:31.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:31 vm04 bash[22793]: cluster 
2026-03-09T20:27:31.468000+0000 mgr.a (mgr.14406) 179 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:31.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:31 vm04 bash[22793]: cluster 2026-03-09T20:27:31.468000+0000 mgr.a (mgr.14406) 179 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:31.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:31 vm03 bash[20708]: cluster 2026-03-09T20:27:31.468000+0000 mgr.a (mgr.14406) 179 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:31.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:31 vm03 bash[20708]: cluster 2026-03-09T20:27:31.468000+0000 mgr.a (mgr.14406) 179 : cluster [DBG] pgmap v142: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:34.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:34 vm04 bash[22793]: cluster 2026-03-09T20:27:33.468232+0000 mgr.a (mgr.14406) 180 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:34.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:34 vm04 bash[22793]: cluster 2026-03-09T20:27:33.468232+0000 mgr.a (mgr.14406) 180 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:34.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:34 vm03 bash[20708]: cluster 2026-03-09T20:27:33.468232+0000 mgr.a (mgr.14406) 180 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:34.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:34 vm03 bash[20708]: cluster 2026-03-09T20:27:33.468232+0000 mgr.a (mgr.14406) 180 : cluster [DBG] pgmap v143: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:36.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:27:35 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:27:35] "GET /metrics HTTP/1.1" 200 21397 "" "Prometheus/2.51.0" 2026-03-09T20:27:36.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:36 vm04 bash[22793]: cluster 2026-03-09T20:27:35.468498+0000 mgr.a (mgr.14406) 181 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:36.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:36 vm04 bash[22793]: cluster 2026-03-09T20:27:35.468498+0000 mgr.a (mgr.14406) 181 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:36.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:36 vm03 bash[20708]: cluster 2026-03-09T20:27:35.468498+0000 mgr.a (mgr.14406) 181 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:36.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:36 vm03 bash[20708]: cluster 2026-03-09T20:27:35.468498+0000 mgr.a (mgr.14406) 181 : cluster [DBG] pgmap v144: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:38.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:38 vm04 bash[22793]: cluster 2026-03-09T20:27:37.468757+0000 mgr.a (mgr.14406) 182 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:38.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:38 vm04 
bash[22793]: cluster 2026-03-09T20:27:37.468757+0000 mgr.a (mgr.14406) 182 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:38.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:38 vm04 bash[22793]: audit 2026-03-09T20:27:38.487889+0000 mon.b (mon.2) 73 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:38.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:38 vm04 bash[22793]: audit 2026-03-09T20:27:38.487889+0000 mon.b (mon.2) 73 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:38 vm03 bash[20708]: cluster 2026-03-09T20:27:37.468757+0000 mgr.a (mgr.14406) 182 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:38 vm03 bash[20708]: cluster 2026-03-09T20:27:37.468757+0000 mgr.a (mgr.14406) 182 : cluster [DBG] pgmap v145: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:38 vm03 bash[20708]: audit 2026-03-09T20:27:38.487889+0000 mon.b (mon.2) 73 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:38.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:38 vm03 bash[20708]: audit 2026-03-09T20:27:38.487889+0000 mon.b (mon.2) 73 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:40.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:40 vm04 bash[22793]: cluster 2026-03-09T20:27:39.468974+0000 mgr.a (mgr.14406) 183 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:40.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:40 vm04 bash[22793]: cluster 2026-03-09T20:27:39.468974+0000 mgr.a (mgr.14406) 183 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:40.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:40 vm03 bash[20708]: cluster 2026-03-09T20:27:39.468974+0000 mgr.a (mgr.14406) 183 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:40.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:40 vm03 bash[20708]: cluster 2026-03-09T20:27:39.468974+0000 mgr.a (mgr.14406) 183 : cluster [DBG] pgmap v146: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:42.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:42 vm03 bash[20708]: cluster 2026-03-09T20:27:41.469193+0000 mgr.a (mgr.14406) 184 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:42.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:42 vm03 bash[20708]: cluster 2026-03-09T20:27:41.469193+0000 mgr.a (mgr.14406) 184 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:43.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:42 vm04 bash[22793]: cluster 2026-03-09T20:27:41.469193+0000 
mgr.a (mgr.14406) 184 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:43.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:42 vm04 bash[22793]: cluster 2026-03-09T20:27:41.469193+0000 mgr.a (mgr.14406) 184 : cluster [DBG] pgmap v147: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:43.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:43 vm03 bash[20708]: cluster 2026-03-09T20:27:43.469470+0000 mgr.a (mgr.14406) 185 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:43.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:43 vm03 bash[20708]: cluster 2026-03-09T20:27:43.469470+0000 mgr.a (mgr.14406) 185 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:44.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:43 vm04 bash[22793]: cluster 2026-03-09T20:27:43.469470+0000 mgr.a (mgr.14406) 185 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:44.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:43 vm04 bash[22793]: cluster 2026-03-09T20:27:43.469470+0000 mgr.a (mgr.14406) 185 : cluster [DBG] pgmap v148: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:46.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:27:45 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:27:45] "GET /metrics HTTP/1.1" 200 21391 "" "Prometheus/2.51.0" 2026-03-09T20:27:46.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:46 vm03 bash[20708]: cluster 2026-03-09T20:27:45.469731+0000 mgr.a (mgr.14406) 186 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:46.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:46 vm03 bash[20708]: cluster 2026-03-09T20:27:45.469731+0000 mgr.a (mgr.14406) 186 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:47.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:46 vm04 bash[22793]: cluster 2026-03-09T20:27:45.469731+0000 mgr.a (mgr.14406) 186 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:47.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:46 vm04 bash[22793]: cluster 2026-03-09T20:27:45.469731+0000 mgr.a (mgr.14406) 186 : cluster [DBG] pgmap v149: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:47.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:47 vm03 bash[20708]: cluster 2026-03-09T20:27:47.469977+0000 mgr.a (mgr.14406) 187 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:47.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:47 vm03 bash[20708]: cluster 2026-03-09T20:27:47.469977+0000 mgr.a (mgr.14406) 187 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:48.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:47 vm04 bash[22793]: cluster 2026-03-09T20:27:47.469977+0000 mgr.a (mgr.14406) 187 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:48.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:47 vm04 bash[22793]: cluster 
2026-03-09T20:27:47.469977+0000 mgr.a (mgr.14406) 187 : cluster [DBG] pgmap v150: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:49.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:48 vm04 bash[22793]: audit 2026-03-09T20:27:48.755244+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:27:49.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:48 vm04 bash[22793]: audit 2026-03-09T20:27:48.755244+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:27:49.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:48 vm03 bash[20708]: audit 2026-03-09T20:27:48.755244+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:27:49.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:48 vm03 bash[20708]: audit 2026-03-09T20:27:48.755244+0000 mon.b (mon.2) 74 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch 2026-03-09T20:27:50.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:49 vm04 bash[22793]: audit 2026-03-09T20:27:49.095181+0000 mon.b (mon.2) 75 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:27:50.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:49 vm04 bash[22793]: audit 2026-03-09T20:27:49.095181+0000 mon.b (mon.2) 75 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:27:50.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:49 vm04 bash[22793]: audit 2026-03-09T20:27:49.096116+0000 mon.b (mon.2) 76 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:27:50.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:49 vm04 bash[22793]: audit 2026-03-09T20:27:49.096116+0000 mon.b (mon.2) 76 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:27:50.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:49 vm04 bash[22793]: audit 2026-03-09T20:27:49.099040+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:27:50.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:49 vm04 bash[22793]: audit 2026-03-09T20:27:49.099040+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:27:50.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:49 vm04 bash[22793]: cluster 2026-03-09T20:27:49.470221+0000 mgr.a (mgr.14406) 188 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:50.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:49 vm04 bash[22793]: cluster 2026-03-09T20:27:49.470221+0000 mgr.a (mgr.14406) 188 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:49 vm03 bash[20708]: audit 2026-03-09T20:27:49.095181+0000 mon.b (mon.2) 75 : audit [DBG] 
from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:27:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:49 vm03 bash[20708]: audit 2026-03-09T20:27:49.095181+0000 mon.b (mon.2) 75 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch 2026-03-09T20:27:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:49 vm03 bash[20708]: audit 2026-03-09T20:27:49.096116+0000 mon.b (mon.2) 76 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:27:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:49 vm03 bash[20708]: audit 2026-03-09T20:27:49.096116+0000 mon.b (mon.2) 76 : audit [INF] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch 2026-03-09T20:27:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:49 vm03 bash[20708]: audit 2026-03-09T20:27:49.099040+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:27:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:49 vm03 bash[20708]: audit 2026-03-09T20:27:49.099040+0000 mon.a (mon.0) 565 : audit [INF] from='mgr.14406 ' entity='mgr.a' 2026-03-09T20:27:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:49 vm03 bash[20708]: cluster 2026-03-09T20:27:49.470221+0000 mgr.a (mgr.14406) 188 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:50.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:49 vm03 bash[20708]: cluster 2026-03-09T20:27:49.470221+0000 mgr.a (mgr.14406) 188 : cluster [DBG] pgmap v151: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:52.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:52 vm04 bash[22793]: cluster 2026-03-09T20:27:51.470444+0000 mgr.a (mgr.14406) 189 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:52.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:52 vm04 bash[22793]: cluster 2026-03-09T20:27:51.470444+0000 mgr.a (mgr.14406) 189 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:52.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:52 vm03 bash[20708]: cluster 2026-03-09T20:27:51.470444+0000 mgr.a (mgr.14406) 189 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:52.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:52 vm03 bash[20708]: cluster 2026-03-09T20:27:51.470444+0000 mgr.a (mgr.14406) 189 : cluster [DBG] pgmap v152: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:53.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:53 vm04 bash[22793]: audit 2026-03-09T20:27:53.487870+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:53.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:53 vm04 bash[22793]: audit 2026-03-09T20:27:53.487870+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:53.907 
INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:53 vm03 bash[20708]: audit 2026-03-09T20:27:53.487870+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:53.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:53 vm03 bash[20708]: audit 2026-03-09T20:27:53.487870+0000 mon.b (mon.2) 77 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:27:54.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:54 vm04 bash[22793]: cluster 2026-03-09T20:27:53.470755+0000 mgr.a (mgr.14406) 190 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:54.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:54 vm04 bash[22793]: cluster 2026-03-09T20:27:53.470755+0000 mgr.a (mgr.14406) 190 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:54 vm03 bash[20708]: cluster 2026-03-09T20:27:53.470755+0000 mgr.a (mgr.14406) 190 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:54.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:54 vm03 bash[20708]: cluster 2026-03-09T20:27:53.470755+0000 mgr.a (mgr.14406) 190 : cluster [DBG] pgmap v153: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:56.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:27:55 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:27:55] "GET /metrics HTTP/1.1" 200 21390 "" "Prometheus/2.51.0" 2026-03-09T20:27:56.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:56 vm04 bash[22793]: cluster 2026-03-09T20:27:55.470992+0000 mgr.a (mgr.14406) 191 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:56.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:56 vm04 bash[22793]: cluster 2026-03-09T20:27:55.470992+0000 mgr.a (mgr.14406) 191 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:56 vm03 bash[20708]: cluster 2026-03-09T20:27:55.470992+0000 mgr.a (mgr.14406) 191 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:56.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:56 vm03 bash[20708]: cluster 2026-03-09T20:27:55.470992+0000 mgr.a (mgr.14406) 191 : cluster [DBG] pgmap v154: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:58.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:58 vm04 bash[22793]: cluster 2026-03-09T20:27:57.471242+0000 mgr.a (mgr.14406) 192 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:58.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:27:58 vm04 bash[22793]: cluster 2026-03-09T20:27:57.471242+0000 mgr.a (mgr.14406) 192 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:58 vm03 bash[20708]: cluster 2026-03-09T20:27:57.471242+0000 mgr.a (mgr.14406) 192 : cluster [DBG] pgmap v155: 1 pgs: 1 
active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:27:58.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:27:58 vm03 bash[20708]: cluster 2026-03-09T20:27:57.471242+0000 mgr.a (mgr.14406) 192 : cluster [DBG] pgmap v155: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:00.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:00 vm04 bash[22793]: cluster 2026-03-09T20:27:59.471471+0000 mgr.a (mgr.14406) 193 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:00.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:00 vm04 bash[22793]: cluster 2026-03-09T20:27:59.471471+0000 mgr.a (mgr.14406) 193 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:00 vm03 bash[20708]: cluster 2026-03-09T20:27:59.471471+0000 mgr.a (mgr.14406) 193 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:00.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:00 vm03 bash[20708]: cluster 2026-03-09T20:27:59.471471+0000 mgr.a (mgr.14406) 193 : cluster [DBG] pgmap v156: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:02.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:02 vm04 bash[22793]: cluster 2026-03-09T20:28:01.471833+0000 mgr.a (mgr.14406) 194 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:02.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:02 vm04 bash[22793]: cluster 2026-03-09T20:28:01.471833+0000 mgr.a (mgr.14406) 194 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:02.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:02 vm03 bash[20708]: cluster 2026-03-09T20:28:01.471833+0000 mgr.a (mgr.14406) 194 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:02.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:02 vm03 bash[20708]: cluster 2026-03-09T20:28:01.471833+0000 mgr.a (mgr.14406) 194 : cluster [DBG] pgmap v157: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:04.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:04 vm04 bash[22793]: cluster 2026-03-09T20:28:03.472064+0000 mgr.a (mgr.14406) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:04.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:04 vm04 bash[22793]: cluster 2026-03-09T20:28:03.472064+0000 mgr.a (mgr.14406) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:04.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:04 vm03 bash[20708]: cluster 2026-03-09T20:28:03.472064+0000 mgr.a (mgr.14406) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:04.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:04 vm03 bash[20708]: cluster 2026-03-09T20:28:03.472064+0000 mgr.a (mgr.14406) 195 : cluster [DBG] pgmap v158: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:06.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:28:05 vm03 bash[20968]: ::ffff:192.168.123.104 - - 
[09/Mar/2026:20:28:05] "GET /metrics HTTP/1.1" 200 21390 "" "Prometheus/2.51.0" 2026-03-09T20:28:06.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:06 vm04 bash[22793]: cluster 2026-03-09T20:28:05.472308+0000 mgr.a (mgr.14406) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:06.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:06 vm04 bash[22793]: cluster 2026-03-09T20:28:05.472308+0000 mgr.a (mgr.14406) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:06 vm03 bash[20708]: cluster 2026-03-09T20:28:05.472308+0000 mgr.a (mgr.14406) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:06.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:06 vm03 bash[20708]: cluster 2026-03-09T20:28:05.472308+0000 mgr.a (mgr.14406) 196 : cluster [DBG] pgmap v159: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:08.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:07 vm04 bash[22793]: cluster 2026-03-09T20:28:07.472594+0000 mgr.a (mgr.14406) 197 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:08.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:07 vm04 bash[22793]: cluster 2026-03-09T20:28:07.472594+0000 mgr.a (mgr.14406) 197 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:08.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:07 vm03 bash[20708]: cluster 2026-03-09T20:28:07.472594+0000 mgr.a (mgr.14406) 197 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:08.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:07 vm03 bash[20708]: cluster 2026-03-09T20:28:07.472594+0000 mgr.a (mgr.14406) 197 : cluster [DBG] pgmap v160: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:09.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:08 vm04 bash[22793]: audit 2026-03-09T20:28:08.488607+0000 mon.b (mon.2) 78 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:09.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:08 vm04 bash[22793]: audit 2026-03-09T20:28:08.488607+0000 mon.b (mon.2) 78 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:09.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:08 vm03 bash[20708]: audit 2026-03-09T20:28:08.488607+0000 mon.b (mon.2) 78 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:09.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:08 vm03 bash[20708]: audit 2026-03-09T20:28:08.488607+0000 mon.b (mon.2) 78 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:10.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:09 vm04 bash[22793]: cluster 2026-03-09T20:28:09.472859+0000 mgr.a (mgr.14406) 198 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB 
used, 60 GiB / 60 GiB avail 2026-03-09T20:28:10.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:09 vm04 bash[22793]: cluster 2026-03-09T20:28:09.472859+0000 mgr.a (mgr.14406) 198 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:10.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:09 vm03 bash[20708]: cluster 2026-03-09T20:28:09.472859+0000 mgr.a (mgr.14406) 198 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:10.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:09 vm03 bash[20708]: cluster 2026-03-09T20:28:09.472859+0000 mgr.a (mgr.14406) 198 : cluster [DBG] pgmap v161: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:12.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:12 vm04 bash[22793]: cluster 2026-03-09T20:28:11.473112+0000 mgr.a (mgr.14406) 199 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:12.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:12 vm04 bash[22793]: cluster 2026-03-09T20:28:11.473112+0000 mgr.a (mgr.14406) 199 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:12.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:12 vm03 bash[20708]: cluster 2026-03-09T20:28:11.473112+0000 mgr.a (mgr.14406) 199 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:12.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:12 vm03 bash[20708]: cluster 2026-03-09T20:28:11.473112+0000 mgr.a (mgr.14406) 199 : cluster [DBG] pgmap v162: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:14.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:13 vm04 bash[22793]: cluster 2026-03-09T20:28:13.473365+0000 mgr.a (mgr.14406) 200 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:14.115 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:13 vm04 bash[22793]: cluster 2026-03-09T20:28:13.473365+0000 mgr.a (mgr.14406) 200 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:14.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:13 vm03 bash[20708]: cluster 2026-03-09T20:28:13.473365+0000 mgr.a (mgr.14406) 200 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:14.157 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:13 vm03 bash[20708]: cluster 2026-03-09T20:28:13.473365+0000 mgr.a (mgr.14406) 200 : cluster [DBG] pgmap v163: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:16.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:28:15 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:28:15] "GET /metrics HTTP/1.1" 200 21393 "" "Prometheus/2.51.0" 2026-03-09T20:28:16.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:16 vm04 bash[22793]: cluster 2026-03-09T20:28:15.473623+0000 mgr.a (mgr.14406) 201 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:16.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:16 vm04 bash[22793]: cluster 2026-03-09T20:28:15.473623+0000 mgr.a (mgr.14406) 201 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 
KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:16.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:16 vm03 bash[20708]: cluster 2026-03-09T20:28:15.473623+0000 mgr.a (mgr.14406) 201 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:16.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:16 vm03 bash[20708]: cluster 2026-03-09T20:28:15.473623+0000 mgr.a (mgr.14406) 201 : cluster [DBG] pgmap v164: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:18.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:18 vm04 bash[22793]: cluster 2026-03-09T20:28:17.473849+0000 mgr.a (mgr.14406) 202 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:18.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:18 vm04 bash[22793]: cluster 2026-03-09T20:28:17.473849+0000 mgr.a (mgr.14406) 202 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:18 vm03 bash[20708]: cluster 2026-03-09T20:28:17.473849+0000 mgr.a (mgr.14406) 202 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:18.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:18 vm03 bash[20708]: cluster 2026-03-09T20:28:17.473849+0000 mgr.a (mgr.14406) 202 : cluster [DBG] pgmap v165: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:20.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:20 vm04 bash[22793]: cluster 2026-03-09T20:28:19.474071+0000 mgr.a (mgr.14406) 203 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:20.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:20 vm04 bash[22793]: cluster 2026-03-09T20:28:19.474071+0000 mgr.a (mgr.14406) 203 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:20 vm03 bash[20708]: cluster 2026-03-09T20:28:19.474071+0000 mgr.a (mgr.14406) 203 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:20.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:20 vm03 bash[20708]: cluster 2026-03-09T20:28:19.474071+0000 mgr.a (mgr.14406) 203 : cluster [DBG] pgmap v166: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:22.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:22 vm04 bash[22793]: cluster 2026-03-09T20:28:21.474295+0000 mgr.a (mgr.14406) 204 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:22.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:22 vm04 bash[22793]: cluster 2026-03-09T20:28:21.474295+0000 mgr.a (mgr.14406) 204 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:22.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:22 vm03 bash[20708]: cluster 2026-03-09T20:28:21.474295+0000 mgr.a (mgr.14406) 204 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:22.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:22 vm03 bash[20708]: cluster 2026-03-09T20:28:21.474295+0000 mgr.a 
(mgr.14406) 204 : cluster [DBG] pgmap v167: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:23.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:23 vm04 bash[22793]: audit 2026-03-09T20:28:23.488536+0000 mon.b (mon.2) 79 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:23.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:23 vm04 bash[22793]: audit 2026-03-09T20:28:23.488536+0000 mon.b (mon.2) 79 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:23 vm03 bash[20708]: audit 2026-03-09T20:28:23.488536+0000 mon.b (mon.2) 79 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:23.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:23 vm03 bash[20708]: audit 2026-03-09T20:28:23.488536+0000 mon.b (mon.2) 79 : audit [DBG] from='mgr.14406 192.168.123.103:0/2720205292' entity='mgr.a' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch 2026-03-09T20:28:24.194 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -s http://192.168.123.104:9095/api/v1/status/config 2026-03-09T20:28:24.199 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -s http://192.168.123.104:9095/api/v1/status/config 2026-03-09T20:28:24.199 INFO:teuthology.orchestra.run.vm03.stderr:+ jq -e '.status == "success"' 2026-03-09T20:28:24.200 INFO:teuthology.orchestra.run.vm03.stdout:{"status":"success","data":{"yaml":"global:\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n evaluation_interval: 10s\n external_labels:\n cluster: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d\nalerting:\n alertmanagers:\n - follow_redirects: true\n enable_http2: true\n scheme: http\n timeout: 10s\n api_version: v2\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.103:8765/sd/prometheus/sd-config?service=alertmanager\nrule_files:\n- /etc/prometheus/alerting/*\nscrape_configs:\n- job_name: ceph\n honor_labels: true\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n relabel_configs:\n - source_labels: [__address__]\n separator: ;\n regex: (.*)\n target_label: cluster\n replacement: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d\n action: replace\n - source_labels: [instance]\n separator: ;\n regex: (.*)\n target_label: instance\n replacement: ceph_cluster\n action: replace\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.103:8765/sd/prometheus/sd-config?service=mgr-prometheus\n- job_name: node\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n relabel_configs:\n - source_labels: [__address__]\n separator: 
;\n regex: (.*)\n target_label: cluster\n replacement: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d\n action: replace\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.103:8765/sd/prometheus/sd-config?service=node-exporter\n- job_name: ceph-exporter\n honor_labels: true\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n relabel_configs:\n - source_labels: [__address__]\n separator: ;\n regex: (.*)\n target_label: cluster\n replacement: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d\n action: replace\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.103:8765/sd/prometheus/sd-config?service=ceph-exporter\n- job_name: nvmeof\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.103:8765/sd/prometheus/sd-config?service=nvmeof\n- job_name: nfs\n honor_timestamps: true\n track_timestamps_staleness: false\n scrape_interval: 10s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /metrics\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n http_sd_configs:\n - follow_redirects: true\n enable_http2: true\n refresh_interval: 1m\n url: http://192.168.123.103:8765/sd/prometheus/sd-config?service=nfs\n- job_name: federate\n honor_labels: true\n honor_timestamps: true\n track_timestamps_staleness: false\n params:\n match[]:\n - '{job=\"ceph\"}'\n - '{job=\"node\"}'\n - '{job=\"haproxy\"}'\n - '{job=\"ceph-exporter\"}'\n scrape_interval: 15s\n scrape_timeout: 10s\n scrape_protocols:\n - OpenMetricsText1.0.0\n - OpenMetricsText0.0.1\n - PrometheusText0.0.4\n metrics_path: /federate\n scheme: http\n enable_compression: true\n follow_redirects: true\n enable_http2: true\n static_configs:\n - targets: []\n"}}true 2026-03-09T20:28:24.200 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -s http://192.168.123.104:9095/api/v1/alerts 2026-03-09T20:28:24.203 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -s http://192.168.123.104:9095/api/v1/alerts 2026-03-09T20:28:24.203 INFO:teuthology.orchestra.run.vm03.stderr:+ jq -e '.data | .alerts | .[] | select(.labels | .alertname == "CephMonDown") | .state == "firing"' 2026-03-09T20:28:24.206 INFO:teuthology.orchestra.run.vm03.stdout:{"status":"success","data":{"alerts":[{"labels":{"alertname":"CephMonDownQuorumAtRisk","oid":"1.3.6.1.4.1.50495.1.2.1.3.1","severity":"critical","type":"ceph_default"},"annotations":{"description":"Quorum requires a majority of monitors (x 2) to be active. Without quorum the cluster will become inoperable, affecting all services and connected clients. 
The following monitors are down: - mon.c on vm08","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"Monitor quorum is at risk"},"state":"firing","activeAt":"2026-03-09T20:27:02.639590217Z","value":"1e+00"},{"labels":{"alertname":"CephMonDown","severity":"warning","type":"ceph_default"},"annotations":{"description":"You have 1 monitor down. Quorum is still intact, but the loss of an additional monitor will make your cluster inoperable. The following monitors are down: - mon.c on vm08\n","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"One or more monitors down"},"state":"firing","activeAt":"2026-03-09T20:27:02.639590217Z","value":"1e+00"},{"labels":{"alertname":"CephHealthWarning","cluster":"f72c9476-1bf4-11f1-9f3a-7162c3a72a6d","instance":"ceph_cluster","job":"ceph","severity":"warning","type":"ceph_default"},"annotations":{"description":"The cluster state has been HEALTH_WARN for more than 15 minutes. Please check 'ceph health detail' for more information.","summary":"Ceph is in the WARNING state"},"state":"pending","activeAt":"2026-03-09T20:27:03.558815712Z","value":"1e+00"}]}}true 2026-03-09T20:28:24.206 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -s http://192.168.123.108:9093/api/v2/status 2026-03-09T20:28:24.209 INFO:teuthology.orchestra.run.vm03.stdout:{"cluster":{"name":"01KKA48W47HBYMH821Y5RC679H","peers":[{"address":"192.168.123.108:9094","name":"01KKA48W47HBYMH821Y5RC679H"}],"status":"ready"},"config":{"original":"global:\n resolve_timeout: 5m\n http_config:\n tls_config:\n insecure_skip_verify: true\n follow_redirects: true\n enable_http2: true\n smtp_hello: localhost\n smtp_require_tls: true\n pagerduty_url: https://events.pagerduty.com/v2/enqueue\n opsgenie_api_url: https://api.opsgenie.com/\n wechat_api_url: https://qyapi.weixin.qq.com/cgi-bin/\n victorops_api_url: https://alert.victorops.com/integrations/generic/20131114/alert/\n telegram_api_url: https://api.telegram.org\n webex_api_url: https://webexapis.com/v1/messages\nroute:\n receiver: default\n continue: false\n routes:\n - receiver: ceph-dashboard\n group_by:\n - alertname\n continue: false\n group_wait: 10s\n group_interval: 10s\n repeat_interval: 1h\nreceivers:\n- name: default\n- name: ceph-dashboard\n webhook_configs:\n - send_resolved: true\n http_config:\n tls_config:\n insecure_skip_verify: true\n follow_redirects: true\n enable_http2: true\n url: https://vm03.local:8443/api/prometheus_receiver\n max_alerts: 0\n - send_resolved: true\n http_config:\n tls_config:\n insecure_skip_verify: true\n follow_redirects: true\n enable_http2: true\n url: https://vm04.local:8443/api/prometheus_receiver\n max_alerts: 0\ntemplates: []\n"},"uptime":"2026-03-09T20:23:00.999Z","versionInfo":{"branch":"HEAD","buildDate":"20221222-14:51:36","buildUser":"root@abe866dd5717","goVersion":"go1.19.4","revision":"258fab7cdd551f2cf251ed0348f0ad7289aee789","version":"0.25.0"}} 2026-03-09T20:28:24.209 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -s http://192.168.123.108:9093/api/v2/alerts 2026-03-09T20:28:24.211 INFO:teuthology.orchestra.run.vm03.stdout:[{"annotations":{"description":"Quorum requires a majority of monitors (x 2) to be active. Without quorum the cluster will become inoperable, affecting all services and connected clients. 
The following monitors are down: - mon.c on vm08","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"Monitor quorum is at risk"},"endsAt":"2026-03-09T20:31:32.639Z","fingerprint":"23b9d39ae02c7ff4","receivers":[{"name":"ceph-dashboard"}],"startsAt":"2026-03-09T20:27:32.639Z","status":{"inhibitedBy":[],"silencedBy":[],"state":"active"},"updatedAt":"2026-03-09T20:27:32.641Z","generatorURL":"http://vm04.local:9095/graph?g0.expr=%28%28ceph_health_detail%7Bname%3D%22MON_DOWN%22%7D+%3D%3D+1%29+%2A+on+%28%29+%28count%28ceph_mon_quorum_status+%3D%3D+1%29+%3D%3D+bool+%28floor%28count%28ceph_mon_metadata%29+%2F+2%29+%2B+1%29%29%29+%3D%3D+1\u0026g0.tab=1","labels":{"alertname":"CephMonDownQuorumAtRisk","cluster":"f72c9476-1bf4-11f1-9f3a-7162c3a72a6d","oid":"1.3.6.1.4.1.50495.1.2.1.3.1","severity":"critical","type":"ceph_default"}},{"annotations":{"description":"You have 1 monitor down. Quorum is still intact, but the loss of an additional monitor will make your cluster inoperable. The following monitors are down: - mon.c on vm08\n","documentation":"https://docs.ceph.com/en/latest/rados/operations/health-checks#mon-down","summary":"One or more monitors down"},"endsAt":"2026-03-09T20:31:32.639Z","fingerprint":"45981000d1c16e2d","receivers":[{"name":"ceph-dashboard"}],"startsAt":"2026-03-09T20:27:32.639Z","status":{"inhibitedBy":[],"silencedBy":[],"state":"active"},"updatedAt":"2026-03-09T20:27:32.642Z","generatorURL":"http://vm04.local:9095/graph?g0.expr=count%28ceph_mon_quorum_status+%3D%3D+0%29+%3C%3D+%28count%28ceph_mon_metadata%29+-+floor%28count%28ceph_mon_metadata%29+%2F+2%29+%2B+1%29\u0026g0.tab=1","labels":{"alertname":"CephMonDown","cluster":"f72c9476-1bf4-11f1-9f3a-7162c3a72a6d","severity":"warning","type":"ceph_default"}}] 2026-03-09T20:28:24.212 INFO:teuthology.orchestra.run.vm03.stderr:+ curl -s http://192.168.123.108:9093/api/v2/alerts 2026-03-09T20:28:24.212 INFO:teuthology.orchestra.run.vm03.stderr:+ jq -e '.[] | select(.labels | .alertname == "CephMonDown") | .status | .state == "active"' 2026-03-09T20:28:24.214 INFO:teuthology.orchestra.run.vm03.stdout:true 2026-03-09T20:28:24.279 DEBUG:teuthology.run_tasks:Unwinding manager cephadm 2026-03-09T20:28:24.281 INFO:tasks.cephadm:Teardown begin 2026-03-09T20:28:24.281 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:28:24.291 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:28:24.298 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:28:24.305 INFO:tasks.cephadm:Disabling cephadm mgr module 2026-03-09T20:28:24.305 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm --image quay.ceph.io/ceph-ci/ceph:e911bdebe5c8faa3800735d1568fcdca65db60df shell -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d -- ceph mgr module disable cephadm 2026-03-09T20:28:24.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:24 vm04 bash[22793]: cluster 2026-03-09T20:28:23.474529+0000 mgr.a (mgr.14406) 205 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:24.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:24 vm04 bash[22793]: cluster 2026-03-09T20:28:23.474529+0000 mgr.a (mgr.14406) 205 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB 
used, 60 GiB / 60 GiB avail 2026-03-09T20:28:24.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:24 vm03 bash[20708]: cluster 2026-03-09T20:28:23.474529+0000 mgr.a (mgr.14406) 205 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:24.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:24 vm03 bash[20708]: cluster 2026-03-09T20:28:23.474529+0000 mgr.a (mgr.14406) 205 : cluster [DBG] pgmap v168: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:26.157 INFO:journalctl@ceph.mgr.a.vm03.stdout:Mar 09 20:28:25 vm03 bash[20968]: ::ffff:192.168.123.104 - - [09/Mar/2026:20:28:25] "GET /metrics HTTP/1.1" 200 21393 "" "Prometheus/2.51.0" 2026-03-09T20:28:26.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:26 vm04 bash[22793]: cluster 2026-03-09T20:28:25.474774+0000 mgr.a (mgr.14406) 206 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:26.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:26 vm04 bash[22793]: cluster 2026-03-09T20:28:25.474774+0000 mgr.a (mgr.14406) 206 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:26.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:26 vm03 bash[20708]: cluster 2026-03-09T20:28:25.474774+0000 mgr.a (mgr.14406) 206 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:26.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:26 vm03 bash[20708]: cluster 2026-03-09T20:28:25.474774+0000 mgr.a (mgr.14406) 206 : cluster [DBG] pgmap v169: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:28.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:28 vm04 bash[22793]: cluster 2026-03-09T20:28:27.475050+0000 mgr.a (mgr.14406) 207 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:28.865 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:28 vm04 bash[22793]: cluster 2026-03-09T20:28:27.475050+0000 mgr.a (mgr.14406) 207 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:28 vm03 bash[20708]: cluster 2026-03-09T20:28:27.475050+0000 mgr.a (mgr.14406) 207 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:28.907 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:28 vm03 bash[20708]: cluster 2026-03-09T20:28:27.475050+0000 mgr.a (mgr.14406) 207 : cluster [DBG] pgmap v170: 1 pgs: 1 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail 2026-03-09T20:28:28.952 INFO:teuthology.orchestra.run.vm03.stderr:Inferring config /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/mon.a/config 2026-03-09T20:28:29.088 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T20:28:29.084+0000 7f0a6c91e640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T20:28:29.088 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T20:28:29.084+0000 7f0a6c91e640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T20:28:29.088 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T20:28:29.084+0000 7f0a6c91e640 -1 auth: error reading file: /etc/ceph/ceph.keyring: 
bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T20:28:29.088 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T20:28:29.084+0000 7f0a6c91e640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T20:28:29.088 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T20:28:29.084+0000 7f0a6c91e640 -1 auth: error reading file: /etc/ceph/ceph.keyring: bufferlist::read_file(/etc/ceph/ceph.keyring): read error:(21) Is a directory 2026-03-09T20:28:29.088 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T20:28:29.084+0000 7f0a6c91e640 -1 auth: failed to load /etc/ceph/ceph.keyring: (21) Is a directory 2026-03-09T20:28:29.088 INFO:teuthology.orchestra.run.vm03.stderr:2026-03-09T20:28:29.084+0000 7f0a6c91e640 -1 monclient: keyring not found 2026-03-09T20:28:29.088 INFO:teuthology.orchestra.run.vm03.stderr:[errno 21] error connecting to the cluster 2026-03-09T20:28:29.135 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:28:29.135 INFO:tasks.cephadm:Cleaning up testdir ceph.* files... 2026-03-09T20:28:29.135 DEBUG:teuthology.orchestra.run.vm03:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T20:28:29.138 DEBUG:teuthology.orchestra.run.vm04:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T20:28:29.141 DEBUG:teuthology.orchestra.run.vm08:> rm -f /home/ubuntu/cephtest/seed.ceph.conf /home/ubuntu/cephtest/ceph.pub 2026-03-09T20:28:29.144 INFO:tasks.cephadm:Stopping all daemons... 2026-03-09T20:28:29.144 INFO:tasks.cephadm.mon.a:Stopping mon.a... 2026-03-09T20:28:29.144 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.a 2026-03-09T20:28:29.229 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:29 vm03 systemd[1]: Stopping Ceph mon.a for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d... 2026-03-09T20:28:29.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:29 vm03 bash[20708]: debug 2026-03-09T20:28:29.224+0000 7fc6cb35e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.a -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T20:28:29.407 INFO:journalctl@ceph.mon.a.vm03.stdout:Mar 09 20:28:29 vm03 bash[20708]: debug 2026-03-09T20:28:29.224+0000 7fc6cb35e640 -1 mon.a@0(leader) e3 *** Got Signal Terminated *** 2026-03-09T20:28:29.538 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.a.service' 2026-03-09T20:28:29.550 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:28:29.550 INFO:tasks.cephadm.mon.a:Stopped mon.a 2026-03-09T20:28:29.550 INFO:tasks.cephadm.mon.c:Stopping mon.b... 2026-03-09T20:28:29.550 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.b 2026-03-09T20:28:29.814 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:29 vm04 systemd[1]: Stopping Ceph mon.b for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d... 
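The verification step logged just above exercises the monitoring stack end to end: with mon.c on vm08 deliberately stopped, the workunit polls the Prometheus HTTP API on port 9095 and the Alertmanager API on port 9093, and relies on jq -e exit codes to assert that the CephMonDown alert is firing in Prometheus and active in Alertmanager. A condensed sketch of that check, assuming it runs on a host with the ceph CLI, curl and jq available (the host/IP lookups are shortened relative to the task script):

    # locate the prometheus and alertmanager daemons via the orchestrator
    PROM_HOST=$(ceph orch ps --daemon-type prometheus -f json | jq -r '.[0].hostname')
    ALERTM_HOST=$(ceph orch ps --daemon-type alertmanager -f json | jq -r '.[0].hostname')
    PROM_IP=$(ceph orch host ls -f json | jq -r --arg h "$PROM_HOST" '.[] | select(.hostname==$h) | .addr')
    ALERTM_IP=$(ceph orch host ls -f json | jq -r --arg h "$ALERTM_HOST" '.[] | select(.hostname==$h) | .addr')
    # prometheus: config endpoint answers and the mon-down alert is firing
    curl -s "http://${PROM_IP}:9095/api/v1/status/config" | jq -e '.status == "success"'
    curl -s "http://${PROM_IP}:9095/api/v1/alerts" \
      | jq -e '.data.alerts[] | select(.labels.alertname == "CephMonDown") | .state == "firing"'
    # alertmanager: the same alert is active
    curl -s "http://${ALERTM_IP}:9093/api/v2/alerts" \
      | jq -e '.[] | select(.labels.alertname == "CephMonDown") | .status.state == "active"'

jq -e makes each pipeline exit non-zero unless the final expression evaluates to true, which is what turns a missing or non-firing alert into a command failure and, in turn, a failed job.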
2026-03-09T20:28:29.814 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:29 vm04 bash[22793]: debug 2026-03-09T20:28:29.591+0000 7f90b2127640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-mon -n mon.b -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-stderr=true (PID: 1) UID: 0 2026-03-09T20:28:29.814 INFO:journalctl@ceph.mon.b.vm04.stdout:Mar 09 20:28:29 vm04 bash[22793]: debug 2026-03-09T20:28:29.591+0000 7f90b2127640 -1 mon.b@2(peon) e3 *** Got Signal Terminated *** 2026-03-09T20:28:29.867 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.b.service' 2026-03-09T20:28:29.877 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:28:29.877 INFO:tasks.cephadm.mon.c:Stopped mon.b 2026-03-09T20:28:29.877 INFO:tasks.cephadm.mon.c:Stopping mon.c... 2026-03-09T20:28:29.877 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.c 2026-03-09T20:28:29.886 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mon.c.service' 2026-03-09T20:28:29.939 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:28:29.940 INFO:tasks.cephadm.mon.c:Stopped mon.c 2026-03-09T20:28:29.940 INFO:tasks.cephadm.mgr.a:Stopping mgr.a... 2026-03-09T20:28:29.940 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mgr.a 2026-03-09T20:28:30.101 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mgr.a.service' 2026-03-09T20:28:30.112 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:28:30.112 INFO:tasks.cephadm.mgr.a:Stopped mgr.a 2026-03-09T20:28:30.112 INFO:tasks.cephadm.mgr.b:Stopping mgr.b... 2026-03-09T20:28:30.112 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mgr.b 2026-03-09T20:28:30.248 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@mgr.b.service' 2026-03-09T20:28:30.258 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:28:30.258 INFO:tasks.cephadm.mgr.b:Stopped mgr.b 2026-03-09T20:28:30.258 INFO:tasks.cephadm.osd.0:Stopping osd.0... 2026-03-09T20:28:30.258 DEBUG:teuthology.orchestra.run.vm03:> sudo systemctl stop ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@osd.0 2026-03-09T20:28:30.657 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 20:28:30 vm03 systemd[1]: Stopping Ceph osd.0 for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d... 
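Each cephadm-managed daemon runs as a systemd template unit named ceph-<fsid>@<daemon>.service, so the teardown logged here walks the daemons host by host, stops the unit, and then kills the journalctl -f follower that teuthology had attached to it. A minimal per-host sketch of the same sequence, assuming FSID is the cluster fsid from this run; the daemon list is illustrative:

    FSID=f72c9476-1bf4-11f1-9f3a-7162c3a72a6d
    for daemon in mon.a mgr.a osd.0; do   # daemons placed on this host in this run
        sudo systemctl stop "ceph-${FSID}@${daemon}"
        # drop the journalctl follower that was streaming this unit's output into the job log
        sudo pkill -f "journalctl -f -n 0 -u ceph-${FSID}@${daemon}.service" || true
    done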
2026-03-09T20:28:30.657 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 20:28:30 vm03 bash[30684]: debug 2026-03-09T20:28:30.296+0000 7f22f6d6e640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T20:28:30.657 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 20:28:30 vm03 bash[30684]: debug 2026-03-09T20:28:30.296+0000 7f22f6d6e640 -1 osd.0 22 *** Got signal Terminated *** 2026-03-09T20:28:30.657 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 20:28:30 vm03 bash[30684]: debug 2026-03-09T20:28:30.296+0000 7f22f6d6e640 -1 osd.0 22 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:28:35.657 INFO:journalctl@ceph.osd.0.vm03.stdout:Mar 09 20:28:35 vm03 bash[38426]: ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d-osd-0 2026-03-09T20:28:35.927 DEBUG:teuthology.orchestra.run.vm03:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@osd.0.service' 2026-03-09T20:28:35.958 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:28:35.958 INFO:tasks.cephadm.osd.0:Stopped osd.0 2026-03-09T20:28:35.958 INFO:tasks.cephadm.osd.1:Stopping osd.1... 2026-03-09T20:28:35.958 DEBUG:teuthology.orchestra.run.vm04:> sudo systemctl stop ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@osd.1 2026-03-09T20:28:36.365 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 20:28:35 vm04 systemd[1]: Stopping Ceph osd.1 for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d... 2026-03-09T20:28:36.365 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 20:28:36 vm04 bash[25763]: debug 2026-03-09T20:28:35.999+0000 7f2ab462c640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.1 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T20:28:36.365 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 20:28:36 vm04 bash[25763]: debug 2026-03-09T20:28:35.999+0000 7f2ab462c640 -1 osd.1 22 *** Got signal Terminated *** 2026-03-09T20:28:36.365 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 20:28:36 vm04 bash[25763]: debug 2026-03-09T20:28:35.999+0000 7f2ab462c640 -1 osd.1 22 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:28:41.365 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 20:28:41 vm04 bash[30802]: ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d-osd-1 2026-03-09T20:28:41.365 INFO:journalctl@ceph.osd.1.vm04.stdout:Mar 09 20:28:41 vm04 bash[30866]: Error response from daemon: No such container: ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d-osd-1 2026-03-09T20:28:41.514 DEBUG:teuthology.orchestra.run.vm04:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@osd.1.service' 2026-03-09T20:28:41.535 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:28:41.535 INFO:tasks.cephadm.osd.1:Stopped osd.1 2026-03-09T20:28:41.535 INFO:tasks.cephadm.osd.2:Stopping osd.2... 2026-03-09T20:28:41.535 DEBUG:teuthology.orchestra.run.vm08:> sudo systemctl stop ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@osd.2 2026-03-09T20:28:41.809 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 09 20:28:41 vm08 systemd[1]: Stopping Ceph osd.2 for f72c9476-1bf4-11f1-9f3a-7162c3a72a6d... 
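The OSD stop messages above end with "Immediate shutdown (osd_fast_shutdown=true)": with that default the OSD exits as soon as it receives SIGTERM rather than draining state first, which is why each systemctl stop returns within a few seconds. The setting can be inspected (or, purely as an illustration, changed) through the config database:

    ceph config get osd osd_fast_shutdown          # expected to print "true" on this cluster
    # ceph config set osd osd_fast_shutdown false  # illustrative only; not done by this test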
2026-03-09T20:28:41.810 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 09 20:28:41 vm08 bash[26090]: debug 2026-03-09T20:28:41.641+0000 7fec8eeac640 -1 received signal: Terminated from /sbin/docker-init -- /usr/bin/ceph-osd -n osd.2 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-stderr=true --default-log-stderr-prefix=debug (PID: 1) UID: 0 2026-03-09T20:28:41.810 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 09 20:28:41 vm08 bash[26090]: debug 2026-03-09T20:28:41.641+0000 7fec8eeac640 -1 osd.2 22 *** Got signal Terminated *** 2026-03-09T20:28:41.810 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 09 20:28:41 vm08 bash[26090]: debug 2026-03-09T20:28:41.641+0000 7fec8eeac640 -1 osd.2 22 *** Immediate shutdown (osd_fast_shutdown=true) *** 2026-03-09T20:28:47.018 INFO:journalctl@ceph.osd.2.vm08.stdout:Mar 09 20:28:46 vm08 bash[31518]: ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d-osd-2 2026-03-09T20:28:47.054 DEBUG:teuthology.orchestra.run.vm08:> sudo pkill -f 'journalctl -f -n 0 -u ceph-f72c9476-1bf4-11f1-9f3a-7162c3a72a6d@osd.2.service' 2026-03-09T20:28:47.070 DEBUG:teuthology.orchestra.run:got remote process result: None 2026-03-09T20:28:47.070 INFO:tasks.cephadm.osd.2:Stopped osd.2 2026-03-09T20:28:47.070 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d --force --keep-logs 2026-03-09T20:28:47.159 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:28:53.432 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d --force --keep-logs 2026-03-09T20:28:53.526 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:28:59.660 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d --force --keep-logs 2026-03-09T20:28:59.752 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:29:05.765 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:29:05.772 INFO:teuthology.orchestra.run.vm03.stderr:rm: cannot remove '/etc/ceph/ceph.client.admin.keyring': Is a directory 2026-03-09T20:29:05.773 DEBUG:teuthology.orchestra.run:got remote process result: 1 2026-03-09T20:29:05.773 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:29:05.780 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring 2026-03-09T20:29:05.788 INFO:tasks.cephadm:Archiving crash dumps... 2026-03-09T20:29:05.788 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641/remote/vm03/crash 2026-03-09T20:29:05.788 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/crash -- . 
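Once every daemon is stopped, the task makes a first rm-cluster pass on each host with --keep-logs, so the cluster state is torn down while /var/log/ceph is left in place for archiving, and then clears the admin client config from /etc/ceph. The two commands as run above (note that the rm on vm03 exits 1 because ceph.client.admin.keyring is a directory on that host):

  # tear down cluster state but keep the log files for the archive step that follows
  sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d --force --keep-logs
  # drop the admin client config from /etc/ceph
  sudo rm -f /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring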
2026-03-09T20:29:05.821 INFO:teuthology.orchestra.run.vm03.stderr:tar: /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/crash: Cannot open: No such file or directory 2026-03-09T20:29:05.821 INFO:teuthology.orchestra.run.vm03.stderr:tar: Error is not recoverable: exiting now 2026-03-09T20:29:05.821 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641/remote/vm04/crash 2026-03-09T20:29:05.821 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/crash -- . 2026-03-09T20:29:05.829 INFO:teuthology.orchestra.run.vm04.stderr:tar: /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/crash: Cannot open: No such file or directory 2026-03-09T20:29:05.829 INFO:teuthology.orchestra.run.vm04.stderr:tar: Error is not recoverable: exiting now 2026-03-09T20:29:05.830 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/crash to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641/remote/vm08/crash 2026-03-09T20:29:05.830 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/crash -- . 2026-03-09T20:29:05.836 INFO:teuthology.orchestra.run.vm08.stderr:tar: /var/lib/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/crash: Cannot open: No such file or directory 2026-03-09T20:29:05.836 INFO:teuthology.orchestra.run.vm08.stderr:tar: Error is not recoverable: exiting now 2026-03-09T20:29:05.837 INFO:tasks.cephadm:Checking cluster log for badness... 2026-03-09T20:29:05.837 DEBUG:teuthology.orchestra.run.vm03:> sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log | egrep CEPHADM_ | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' | egrep -v MON_DOWN | egrep -v 'mons down' | egrep -v 'mon down' | egrep -v 'out of quorum' | egrep -v CEPHADM_STRAY_DAEMON | egrep -v CEPHADM_FAILED_DAEMON | head -n 1 2026-03-09T20:29:05.878 INFO:tasks.cephadm:Compressing logs... 
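The "Checking cluster log for badness" step above is a single grep pipeline over the cluster log on vm03: keep only [ERR]/[WRN]/[SEC] entries, restrict them to CEPHADM_ messages, filter out the warnings this job ignores (mon down, out of quorum, stray or failed cephadm daemons), and print at most the first surviving line. The same pipeline, wrapped for readability:

  sudo egrep '\[ERR\]|\[WRN\]|\[SEC\]' /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log \
    | egrep CEPHADM_ \
    | egrep -v '\(MDS_ALL_DOWN\)' | egrep -v '\(MDS_UP_LESS_THAN_MAX\)' \
    | egrep -v MON_DOWN | egrep -v 'mons down' | egrep -v 'mon down' | egrep -v 'out of quorum' \
    | egrep -v CEPHADM_STRAY_DAEMON | egrep -v CEPHADM_FAILED_DAEMON \
    | head -n 1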
2026-03-09T20:29:05.878 DEBUG:teuthology.orchestra.run.vm03:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T20:29:05.922 DEBUG:teuthology.orchestra.run.vm04:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T20:29:05.924 DEBUG:teuthology.orchestra.run.vm08:> time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose -- 2026-03-09T20:29:05.929 INFO:teuthology.orchestra.run.vm03.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T20:29:05.929 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T20:29:05.930 INFO:teuthology.orchestra.run.vm04.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T20:29:05.931 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mgr.a.log 2026-03-09T20:29:05.931 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T20:29:05.932 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log 2026-03-09T20:29:05.932 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mon.b.log 2026-03-09T20:29:05.933 INFO:teuthology.orchestra.run.vm08.stderr:find: ‘/var/log/rbd-target-api’: No such file or directory 2026-03-09T20:29:05.933 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log: 87.8% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log.gz 2026-03-09T20:29:05.933 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-osd.1.log 2026-03-09T20:29:05.934 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/cephadm.log: /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mgr.a.log: gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log 2026-03-09T20:29:05.934 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mon.b.log: 84.9% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T20:29:05.934 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mgr.b.log 2026-03-09T20:29:05.934 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/cephadm.log 2026-03-09T20:29:05.934 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log 2026-03-09T20:29:05.935 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/cephadm.log: gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mon.c.log 2026-03-09T20:29:05.935 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log: 87.8% 88.6% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T20:29:05.935 INFO:teuthology.orchestra.run.vm08.stderr: -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log.gz 2026-03-09T20:29:05.936 
INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-osd.2.log 2026-03-09T20:29:05.936 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mon.c.log: gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.audit.log 2026-03-09T20:29:05.940 INFO:teuthology.orchestra.run.vm03.stderr: 92.4% -- replaced with /var/log/ceph/cephadm.log.gz 2026-03-09T20:29:05.941 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mon.a.log 2026-03-09T20:29:05.942 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-osd.2.log: gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-volume.log 2026-03-09T20:29:05.942 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log: 87.8% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.log.gz 2026-03-09T20:29:05.942 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.audit.log 2026-03-09T20:29:05.942 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.audit.log: 90.0% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.audit.log.gz 2026-03-09T20:29:05.946 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.cephadm.log 2026-03-09T20:29:05.946 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-osd.1.log: gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.audit.log 2026-03-09T20:29:05.950 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mgr.b.log: 91.0% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mgr.b.log.gz 2026-03-09T20:29:05.951 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-volume.log 2026-03-09T20:29:05.951 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.audit.log: 90.1% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.audit.log.gz 2026-03-09T20:29:05.953 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mon.a.log: gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-volume.log 2026-03-09T20:29:05.954 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-volume.log: 95.8% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-volume.log.gz 2026-03-09T20:29:05.954 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.cephadm.log 2026-03-09T20:29:05.954 INFO:teuthology.orchestra.run.vm08.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.cephadm.log: 80.3% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.cephadm.log.gz 2026-03-09T20:29:05.955 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.audit.log: 90.0% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.audit.log.gz 2026-03-09T20:29:05.955 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose 
-- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.cephadm.log 2026-03-09T20:29:05.960 INFO:teuthology.orchestra.run.vm08.stderr: 93.3% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-osd.2.log.gz 2026-03-09T20:29:05.965 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-volume.log: 95.9% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-volume.log.gz 2026-03-09T20:29:05.966 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-osd.0.log 2026-03-09T20:29:05.966 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-volume.log: 95.8% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-volume.log.gz 2026-03-09T20:29:05.967 INFO:teuthology.orchestra.run.vm04.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.cephadm.log: 80.3% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.cephadm.log.gz 2026-03-09T20:29:05.967 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.cephadm.log: 82.4% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph.cephadm.log.gz 2026-03-09T20:29:05.979 INFO:teuthology.orchestra.run.vm04.stderr: 93.5% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-osd.1.log.gz 2026-03-09T20:29:05.992 INFO:teuthology.orchestra.run.vm08.stderr: 93.0% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mon.c.log.gz 2026-03-09T20:29:05.993 INFO:teuthology.orchestra.run.vm08.stderr: 2026-03-09T20:29:05.993 INFO:teuthology.orchestra.run.vm08.stderr:real 0m0.066s 2026-03-09T20:29:05.993 INFO:teuthology.orchestra.run.vm08.stderr:user 0m0.084s 2026-03-09T20:29:05.993 INFO:teuthology.orchestra.run.vm08.stderr:sys 0m0.009s 2026-03-09T20:29:05.997 INFO:teuthology.orchestra.run.vm03.stderr:/var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-osd.0.log: 90.7% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mgr.a.log.gz 2026-03-09T20:29:06.002 INFO:teuthology.orchestra.run.vm03.stderr: 93.3% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-osd.0.log.gz 2026-03-09T20:29:06.015 INFO:teuthology.orchestra.run.vm04.stderr: 92.9% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mon.b.log.gz 2026-03-09T20:29:06.016 INFO:teuthology.orchestra.run.vm04.stderr: 2026-03-09T20:29:06.016 INFO:teuthology.orchestra.run.vm04.stderr:real 0m0.090s 2026-03-09T20:29:06.016 INFO:teuthology.orchestra.run.vm04.stderr:user 0m0.113s 2026-03-09T20:29:06.016 INFO:teuthology.orchestra.run.vm04.stderr:sys 0m0.027s 2026-03-09T20:29:06.140 INFO:teuthology.orchestra.run.vm03.stderr: 91.2% -- replaced with /var/log/ceph/f72c9476-1bf4-11f1-9f3a-7162c3a72a6d/ceph-mon.a.log.gz 2026-03-09T20:29:06.141 INFO:teuthology.orchestra.run.vm03.stderr: 2026-03-09T20:29:06.141 INFO:teuthology.orchestra.run.vm03.stderr:real 0m0.217s 2026-03-09T20:29:06.141 INFO:teuthology.orchestra.run.vm03.stderr:user 0m0.271s 2026-03-09T20:29:06.141 INFO:teuthology.orchestra.run.vm03.stderr:sys 0m0.017s 2026-03-09T20:29:06.141 INFO:tasks.cephadm:Archiving logs... 
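Before archiving, the logs were compressed with one find | xargs | gzip pipeline per host. --max-args=1 hands each log file to its own gzip process and --max-procs=0 lets xargs run as many of them in parallel as it can, which is why the --verbose progress messages from the three hosts (and from concurrent gzips on the same host) interleave above. The command, wrapped for readability:

  time sudo find /var/log/ceph /var/log/rbd-target-api -name '*.log' -print0 \
    | sudo xargs --max-args=1 --max-procs=0 --verbose -0 --no-run-if-empty -- gzip -5 --verbose --

The missing /var/log/rbd-target-api directory only produces a warning from find; the rest of the pipeline still runs.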
2026-03-09T20:29:06.141 DEBUG:teuthology.misc:Transferring archived files from vm03:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641/remote/vm03/log 2026-03-09T20:29:06.142 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T20:29:06.210 DEBUG:teuthology.misc:Transferring archived files from vm04:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641/remote/vm04/log 2026-03-09T20:29:06.210 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T20:29:06.227 DEBUG:teuthology.misc:Transferring archived files from vm08:/var/log/ceph to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641/remote/vm08/log 2026-03-09T20:29:06.227 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /var/log/ceph -- . 2026-03-09T20:29:06.239 INFO:tasks.cephadm:Removing cluster... 2026-03-09T20:29:06.239 DEBUG:teuthology.orchestra.run.vm03:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d --force 2026-03-09T20:29:06.339 INFO:teuthology.orchestra.run.vm03.stdout:Deleting cluster with fsid: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:29:07.568 DEBUG:teuthology.orchestra.run.vm04:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d --force 2026-03-09T20:29:07.660 INFO:teuthology.orchestra.run.vm04.stdout:Deleting cluster with fsid: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:29:08.905 DEBUG:teuthology.orchestra.run.vm08:> sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d --force 2026-03-09T20:29:08.999 INFO:teuthology.orchestra.run.vm08.stdout:Deleting cluster with fsid: f72c9476-1bf4-11f1-9f3a-7162c3a72a6d 2026-03-09T20:29:10.252 INFO:tasks.cephadm:Removing cephadm ... 2026-03-09T20:29:10.252 DEBUG:teuthology.orchestra.run.vm03:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T20:29:10.256 DEBUG:teuthology.orchestra.run.vm04:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T20:29:10.259 DEBUG:teuthology.orchestra.run.vm08:> rm -rf /home/ubuntu/cephtest/cephadm 2026-03-09T20:29:10.263 INFO:tasks.cephadm:Teardown complete 2026-03-09T20:29:10.263 DEBUG:teuthology.run_tasks:Unwinding manager install 2026-03-09T20:29:10.265 INFO:teuthology.task.install.util:Removing shipped files: /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer... 2026-03-09T20:29:10.265 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T20:29:10.298 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T20:29:10.303 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer 2026-03-09T20:29:10.319 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 
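Teardown of the cluster itself finishes just above: the /var/log/ceph trees are pulled back with tar, a second rm-cluster pass without --keep-logs removes whatever is left on each host, the cephadm binary under /home/ubuntu/cephtest is deleted, and the install task unwinds by removing the helper scripts it had shipped. Condensed from the commands above:

  # final cleanup pass; unlike the earlier pass, logs are no longer preserved
  sudo /home/ubuntu/cephtest/cephadm rm-cluster --fsid f72c9476-1bf4-11f1-9f3a-7162c3a72a6d --force
  # remove the cephadm binary staged for this job
  rm -rf /home/ubuntu/cephtest/cephadm
  # remove the helper scripts shipped by the install task
  sudo rm -f -- /home/ubuntu/cephtest/valgrind.supp /usr/bin/daemon-helper /usr/bin/adjust-ulimits /usr/bin/stdin-killer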
2026-03-09T20:29:10.320 DEBUG:teuthology.orchestra.run.vm03:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-09T20:29:10.325 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 2026-03-09T20:29:10.325 DEBUG:teuthology.orchestra.run.vm04:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-09T20:29:10.330 INFO:teuthology.task.install.deb:Removing packages: ceph, cephadm, ceph-mds, ceph-mgr, ceph-common, ceph-fuse, ceph-test, ceph-volume, radosgw, python3-rados, python3-rgw, python3-cephfs, python3-rbd, libcephfs2, libcephfs-dev, librados2, librbd1, rbd-fuse on Debian system. 2026-03-09T20:29:10.330 DEBUG:teuthology.orchestra.run.vm08:> for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev librados2 librbd1 rbd-fuse ; do sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" purge $d || true ; done 2026-03-09T20:29:10.388 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:10.389 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:10.390 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:10.520 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:10.521 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:10.601 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:10.602 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:10.608 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:10.609 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:10.628 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:10.628 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:29:10.629 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T20:29:10.629 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 
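Package removal is a per-package purge loop; each apt-get call ends with || true, presumably so that a package missing on a given host does not abort the teardown. Wrapped for readability, with the package list shown in the log above:

  for d in ceph cephadm ceph-mds ceph-mgr ceph-common ceph-fuse ceph-test ceph-volume radosgw \
           python3-rados python3-rgw python3-cephfs python3-rbd libcephfs2 libcephfs-dev \
           librados2 librbd1 rbd-fuse ; do
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes \
      -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" \
      purge $d || true
  done

The repeated "W: --force-yes is deprecated" warnings further down come from the --force-yes flag in this loop.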
2026-03-09T20:29:10.646 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:10.647 INFO:teuthology.orchestra.run.vm08.stdout: ceph* 2026-03-09T20:29:10.819 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:10.820 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:29:10.820 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T20:29:10.820 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:10.836 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:10.836 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:29:10.837 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 sg3-utils sg3-utils-udev 2026-03-09T20:29:10.837 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:10.838 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:10.839 INFO:teuthology.orchestra.run.vm03.stdout: ceph* 2026-03-09T20:29:10.850 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:10.850 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-09T20:29:10.855 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:10.856 INFO:teuthology.orchestra.run.vm04.stdout: ceph* 2026-03-09T20:29:10.893 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-09T20:29:10.894 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:11.037 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:11.037 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-09T20:29:11.042 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:11.042 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 47.1 kB disk space will be freed. 2026-03-09T20:29:11.079 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 
70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-09T20:29:11.080 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:11.082 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118605 files and directories currently installed.) 2026-03-09T20:29:11.084 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:12.063 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:12.071 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:12.097 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:12.107 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:12.171 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:12.207 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:12.256 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:12.257 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:12.293 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:12.293 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:12.355 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:12.356 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:29:12.356 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T20:29:12.356 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:12.363 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:12.364 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-cephadm* cephadm* 2026-03-09T20:29:12.433 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:12.434 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 
2026-03-09T20:29:12.454 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:12.454 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:29:12.454 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T20:29:12.454 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:12.467 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:12.468 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-cephadm* cephadm* 2026-03-09T20:29:12.530 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T20:29:12.530 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-09T20:29:12.569 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-09T20:29:12.571 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:12.589 INFO:teuthology.orchestra.run.vm03.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:12.619 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:12.619 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:29:12.619 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T20:29:12.619 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:12.619 INFO:teuthology.orchestra.run.vm03.stdout:Looking for files to backup/remove ... 2026-03-09T20:29:12.621 INFO:teuthology.orchestra.run.vm03.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 2026-03-09T20:29:12.624 INFO:teuthology.orchestra.run.vm03.stdout:Removing user `cephadm' ... 2026-03-09T20:29:12.624 INFO:teuthology.orchestra.run.vm03.stdout:Warning: group `nogroup' has no more members. 2026-03-09T20:29:12.626 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:12.627 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-cephadm* cephadm* 2026-03-09T20:29:12.635 INFO:teuthology.orchestra.run.vm03.stdout:Done. 2026-03-09T20:29:12.656 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T20:29:12.656 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-09T20:29:12.661 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 
2026-03-09T20:29:12.689 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-09T20:29:12.690 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:12.708 INFO:teuthology.orchestra.run.vm08.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:12.737 INFO:teuthology.orchestra.run.vm08.stdout:Looking for files to backup/remove ... 2026-03-09T20:29:12.738 INFO:teuthology.orchestra.run.vm08.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 2026-03-09T20:29:12.740 INFO:teuthology.orchestra.run.vm08.stdout:Removing user `cephadm' ... 2026-03-09T20:29:12.740 INFO:teuthology.orchestra.run.vm08.stdout:Warning: group `nogroup' has no more members. 2026-03-09T20:29:12.750 INFO:teuthology.orchestra.run.vm08.stdout:Done. 2026-03-09T20:29:12.755 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T20:29:12.758 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:12.770 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:12.804 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T20:29:12.804 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 1775 kB disk space will be freed. 2026-03-09T20:29:12.840 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118603 files and directories currently installed.) 2026-03-09T20:29:12.842 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:12.861 INFO:teuthology.orchestra.run.vm04.stdout:Removing cephadm (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T20:29:12.866 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T20:29:12.868 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:12.891 INFO:teuthology.orchestra.run.vm04.stdout:Looking for files to backup/remove ... 2026-03-09T20:29:12.892 INFO:teuthology.orchestra.run.vm04.stdout:Not backing up/removing `/var/lib/cephadm', it matches ^/var/.*. 2026-03-09T20:29:12.894 INFO:teuthology.orchestra.run.vm04.stdout:Removing user `cephadm' ... 2026-03-09T20:29:12.894 INFO:teuthology.orchestra.run.vm04.stdout:Warning: group `nogroup' has no more members. 2026-03-09T20:29:12.904 INFO:teuthology.orchestra.run.vm04.stdout:Done. 2026-03-09T20:29:12.929 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:13.039 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T20:29:13.041 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for cephadm (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:14.005 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:14.039 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:14.084 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:14.118 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:14.249 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:14.250 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:14.263 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:14.298 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:14.321 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:14.321 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 
2026-03-09T20:29:14.420 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:14.420 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:29:14.421 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T20:29:14.421 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:14.433 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:14.434 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mds* 2026-03-09T20:29:14.496 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:14.496 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:29:14.496 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T20:29:14.496 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:14.508 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:14.509 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mds* 2026-03-09T20:29:14.511 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:14.511 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:14.623 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:14.623 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 7437 kB disk space will be freed. 2026-03-09T20:29:14.662 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T20:29:14.664 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:14.690 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:14.690 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 7437 kB disk space will be freed. 2026-03-09T20:29:14.717 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:14.718 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mon kpartx libboost-iostreams1.74.0 libboost-thread1.74.0 libpmemobj1 2026-03-09T20:29:14.718 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 python-asyncssh-doc python3-asyncssh sg3-utils sg3-utils-udev 2026-03-09T20:29:14.718 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T20:29:14.729 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:14.730 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mds* 2026-03-09T20:29:14.731 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T20:29:14.733 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:14.892 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:14.892 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 7437 kB disk space will be freed. 2026-03-09T20:29:14.927 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118529 files and directories currently installed.) 2026-03-09T20:29:14.929 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:15.084 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:15.141 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:15.176 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T20:29:15.178 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:15.223 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 
75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T20:29:15.225 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:15.354 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:15.468 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T20:29:15.470 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mds (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:16.784 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:16.822 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:16.827 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:16.861 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:17.021 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:17.021 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:17.071 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:17.072 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:17.105 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:17.141 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 
2026-03-09T20:29:17.184 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:17.184 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0 2026-03-09T20:29:17.184 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc 2026-03-09T20:29:17.184 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot 2026-03-09T20:29:17.184 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev 2026-03-09T20:29:17.185 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T20:29:17.196 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:17.196 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local* 2026-03-09T20:29:17.197 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-k8sevents* 2026-03-09T20:29:17.262 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:17.262 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev 2026-03-09T20:29:17.263 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:17.273 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:17.273 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local* 2026-03-09T20:29:17.274 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-k8sevents* 2026-03-09T20:29:17.351 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:17.352 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:17.375 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded. 2026-03-09T20:29:17.376 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 165 MB disk space will be freed. 2026-03-09T20:29:17.412 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 
55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T20:29:17.414 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.425 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.453 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.457 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded. 2026-03-09T20:29:17.457 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 165 MB disk space will be freed. 2026-03-09T20:29:17.493 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.499 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 2026-03-09T20:29:17.502 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.513 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.539 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T20:29:17.554 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:17.554 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core ceph-mon kpartx libboost-iostreams1.74.0 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libpmemobj1 libsgutils2-2 python-asyncssh-doc 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools python3-cheroot 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-psutil python3-pyinotify 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python3-repoze.lru python3-requests-oauthlib python3-routes python3-rsa 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplegeneric python3-simplejson python3-singledispatch 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn python3-sklearn-lib python3-tempita python3-tempora 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python3-threadpoolctl python3-waitress python3-webob python3-websocket 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev 2026-03-09T20:29:17.555 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:17.567 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:17.567 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr* ceph-mgr-dashboard* ceph-mgr-diskprediction-local* 2026-03-09T20:29:17.567 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-k8sevents* 2026-03-09T20:29:17.578 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.738 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 4 to remove and 10 not upgraded. 2026-03-09T20:29:17.738 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 165 MB disk space will be freed. 2026-03-09T20:29:17.779 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 118521 files and directories currently installed.) 
2026-03-09T20:29:17.781 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-k8sevents (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.792 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-diskprediction-local (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.824 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-dashboard (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.864 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:17.964 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117937 files and directories currently installed.) 2026-03-09T20:29:17.966 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:18.029 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117937 files and directories currently installed.) 2026-03-09T20:29:18.031 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:18.365 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117937 files and directories currently installed.) 2026-03-09T20:29:18.368 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mgr (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:19.511 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:19.544 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:19.583 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:19.617 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:19.753 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree...
2026-03-09T20:29:19.753 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:19.827 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:19.827 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:19.893 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:19.926 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:19.926 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:19.927 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:19.928 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:19.929 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 
2026-03-09T20:29:19.942 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:19.943 INFO:teuthology.orchestra.run.vm03.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw* 2026-03-09T20:29:20.023 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:20.023 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:20.023 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:20.024 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:20.037 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:20.038 INFO:teuthology.orchestra.run.vm08.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw* 2026-03-09T20:29:20.126 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T20:29:20.126 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 472 MB disk space will be freed. 2026-03-09T20:29:20.139 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:20.140 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:20.161 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117937 files and directories currently installed.)
2026-03-09T20:29:20.162 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:20.224 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:20.232 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T20:29:20.232 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 472 MB disk space will be freed. 2026-03-09T20:29:20.266 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117937 files and directories currently installed.) 2026-03-09T20:29:20.268 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:20.306 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:20.306 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:20.306 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: 
python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:20.307 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:20.319 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:20.320 INFO:teuthology.orchestra.run.vm04.stdout: ceph-base* ceph-common* ceph-mon* ceph-osd* ceph-test* ceph-volume* radosgw* 2026-03-09T20:29:20.326 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:20.493 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T20:29:20.494 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 472 MB disk space will be freed. 2026-03-09T20:29:20.531 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117937 files and directories currently installed.) 2026-03-09T20:29:20.533 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-volume (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:20.591 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:20.681 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:20.782 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:21.047 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:21.128 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:21.211 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:21.443 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:21.544 INFO:teuthology.orchestra.run.vm03.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:21.629 INFO:teuthology.orchestra.run.vm08.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:21.865 INFO:teuthology.orchestra.run.vm04.stdout:Removing radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:21.903 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:21.940 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:22.050 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T20:29:22.090 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:22.253 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-test (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:22.292 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:22.366 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:22.401 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:22.473 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117455 files and directories currently installed.) 2026-03-09T20:29:22.476 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:22.500 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:22.536 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:22.601 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117455 files and directories currently installed.) 2026-03-09T20:29:22.602 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:22.758 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:22.792 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:22.875 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117455 files and directories currently installed.) 2026-03-09T20:29:22.878 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for radosgw (19.2.3-678-ge911bdeb-1jammy) ...
2026-03-09T20:29:23.091 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:23.219 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:23.451 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-mon (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:23.505 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:23.640 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:23.873 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-base (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:23.959 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:24.037 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:24.277 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:24.400 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:24.456 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:24.698 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-osd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:25.766 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:25.803 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:25.816 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:25.850 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:26.016 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:26.016 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:26.056 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:26.057 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 
2026-03-09T20:29:26.119 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:26.119 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:26.119 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:26.119 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:26.119 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:26.119 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:26.119 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:26.119 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:26.119 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:26.119 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:26.120 INFO:teuthology.orchestra.run.vm08.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:26.120 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:26.120 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:26.120 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:26.120 INFO:teuthology.orchestra.run.vm08.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:26.120 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:26.120 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:26.120 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T20:29:26.127 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:26.127 INFO:teuthology.orchestra.run.vm08.stdout: ceph-fuse* 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:26.173 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:26.180 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:26.181 INFO:teuthology.orchestra.run.vm03.stdout: ceph-fuse* 2026-03-09T20:29:26.235 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:26.268 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:26.285 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:26.285 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-09T20:29:26.319 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117443 files and directories currently installed.)
2026-03-09T20:29:26.320 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:26.353 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:26.354 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-09T20:29:26.387 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117443 files and directories currently installed.) 2026-03-09T20:29:26.388 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:26.454 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:26.455 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:26.594 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:26.594 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:26.594 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib 
python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:26.595 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:26.596 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:26.596 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:26.609 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:26.610 INFO:teuthology.orchestra.run.vm04.stdout: ceph-fuse* 2026-03-09T20:29:26.729 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:26.779 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:26.787 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:26.787 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 3673 kB disk space will be freed. 2026-03-09T20:29:26.823 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117443 files and directories currently installed.) 2026-03-09T20:29:26.824 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... 117434 files and directories currently installed.) 2026-03-09T20:29:26.825 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:26.826 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:26.871 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... 117434 files and directories currently installed.)
2026-03-09T20:29:26.874 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:27.252 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:27.353 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... 117434 files and directories currently installed.) 2026-03-09T20:29:27.356 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for ceph-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:28.182 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:28.217 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:28.279 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:28.315 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:28.406 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:28.406 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:28.509 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:28.510 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information...
2026-03-09T20:29:28.598 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-09T20:29:28.598 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:28.598 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:28.599 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:28.599 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:28.599 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:28.599 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:28.599 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:28.599 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:28.599 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:28.599 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:28.599 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:28.599 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:28.600 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:28.600 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:28.600 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:28.600 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:28.600 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:28.600 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:28.625 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:28.626 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:28.660 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 
2026-03-09T20:29:28.711 INFO:teuthology.orchestra.run.vm08.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-09T20:29:28.711 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:28.711 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:28.711 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:28.712 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:28.735 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:28.735 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:28.769 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:28.819 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:28.852 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:28.881 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:28.881 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:28.984 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:28.985 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 
2026-03-09T20:29:29.012 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:29.013 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:29.061 INFO:teuthology.orchestra.run.vm03.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-09T20:29:29.061 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:29.061 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:29.061 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:29.062 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:29.063 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:29.063 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:29.063 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:29.063 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:29.094 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:29.094 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:29.128 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 
2026-03-09T20:29:29.195 INFO:teuthology.orchestra.run.vm08.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-09T20:29:29.195 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:29.195 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:29.195 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:29.196 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:29.223 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:29.223 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-09T20:29:29.253 INFO:teuthology.orchestra.run.vm04.stdout:Package 'ceph-test' is not installed, so not removed 2026-03-09T20:29:29.253 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:29.254 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:29.254 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:29.255 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:29.259 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:29.277 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:29.277 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:29.313 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:29.347 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:29.348 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:29.434 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:29.435 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 
2026-03-09T20:29:29.515 INFO:teuthology.orchestra.run.vm03.stdout:Package 'radosgw' is not installed, so not removed 2026-03-09T20:29:29.515 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:29.515 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:29.515 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:29.516 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:29.539 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:29.540 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:29.541 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:29.541 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:29.577 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 
2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout:Package 'radosgw' is not installed, so not removed 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:29.628 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:29.642 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:29.642 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:29.676 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 
2026-03-09T20:29:29.722 INFO:teuthology.orchestra.run.vm04.stdout:Package 'ceph-volume' is not installed, so not removed 2026-03-09T20:29:29.722 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:29.722 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:29.722 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:29.723 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:29.748 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:29.748 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:29.781 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:29.803 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:29.803 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:29.890 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:29.891 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 
2026-03-09T20:29:29.957 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:29.957 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:29.957 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:29.957 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:29.957 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-09T20:29:29.958 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:29.972 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:29.973 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 
2026-03-09T20:29:29.975 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:29.975 INFO:teuthology.orchestra.run.vm03.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-09T20:29:30.083 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:30.083 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:30.083 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:30.083 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip 2026-03-09T20:29:30.084 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 
2026-03-09T20:29:30.099 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:30.099 INFO:teuthology.orchestra.run.vm08.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-09T20:29:30.170 INFO:teuthology.orchestra.run.vm04.stdout:Package 'radosgw' is not installed, so not removed 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liboath0 libonig5 libpmemobj1 libradosstriper1 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: libsgutils2-2 libsqlite3-mod-ceph nvme-cli python-asyncssh-doc 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python-pastedeploy-tpl python3-asyncssh python3-cachetools 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-ceph-common python3-cheroot python3-cherrypy3 python3-google-auth 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.classes python3-jaraco.collections python3-jaraco.functools 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.text python3-joblib python3-kubernetes python3-logutils 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-mako python3-natsort python3-paste python3-pastedeploy 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-pastescript python3-pecan python3-portend python3-prettytable 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-psutil python3-pyinotify python3-repoze.lru 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-requests-oauthlib python3-routes python3-rsa python3-simplegeneric 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-simplejson python3-singledispatch python3-sklearn 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-sklearn-lib python3-tempita python3-tempora python3-threadpoolctl 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-waitress python3-wcwidth python3-webob python3-websocket 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: python3-webtest python3-werkzeug python3-zc.lockfile sg3-utils 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout: sg3-utils-udev smartmontools socat xmlstarlet 2026-03-09T20:29:30.171 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:30.173 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-09T20:29:30.173 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-09T20:29:30.185 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:30.185 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:30.210 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 
45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-09T20:29:30.213 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:30.217 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:30.224 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:30.236 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:30.286 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-09T20:29:30.286 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-09T20:29:30.324 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-09T20:29:30.326 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:30.339 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:30.350 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:30.377 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:30.378 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 
2026-03-09T20:29:30.573 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:30.573 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:30.573 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:30.573 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:30.573 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip 2026-03-09T20:29:30.574 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:30.587 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:30.588 INFO:teuthology.orchestra.run.vm04.stdout: python3-cephfs* python3-rados* python3-rgw* 2026-03-09T20:29:30.770 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 3 to remove and 10 not upgraded. 2026-03-09T20:29:30.770 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 2062 kB disk space will be freed. 2026-03-09T20:29:30.802 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 
70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117434 files and directories currently installed.) 2026-03-09T20:29:30.804 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cephfs (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:30.815 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rgw (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:30.825 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rados (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:31.317 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:31.352 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:31.544 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:31.561 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:31.561 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:31.579 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:31.740 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress 
python3-wcwidth 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-09T20:29:31.741 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:31.755 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:31.755 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:31.790 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:31.798 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:31.799 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:31.942 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:31.948 INFO:teuthology.orchestra.run.vm08.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-09T20:29:31.948 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:31.949 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:31.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:31.950 INFO:teuthology.orchestra.run.vm08.stdout: 
python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:31.950 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:31.950 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip 2026-03-09T20:29:31.950 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:31.972 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:31.972 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:31.977 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:32.000 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:32.001 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:32.005 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:32.169 INFO:teuthology.orchestra.run.vm03.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-09T20:29:32.169 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:32.169 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:32.169 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:32.169 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: 
python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-09T20:29:32.170 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:32.195 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:32.195 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:32.196 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:32.197 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:32.219 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:32.220 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:32.228 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:32.380 INFO:teuthology.orchestra.run.vm04.stdout:Package 'python3-rgw' is not installed, so not removed 2026-03-09T20:29:32.380 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:32.380 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:32.380 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:32.380 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:32.380 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:32.380 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:32.380 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev 
smartmontools socat unzip 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip 2026-03-09T20:29:32.381 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:32.394 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:32.394 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:32.397 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:32.398 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:32.400 INFO:teuthology.orchestra.run.vm08.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-09T20:29:32.400 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:32.400 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:32.401 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:32.401 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:32.401 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:32.401 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:32.401 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:32.401 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:32.401 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip 2026-03-09T20:29:32.402 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:32.428 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 
2026-03-09T20:29:32.429 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:32.429 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:32.465 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:32.551 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:32.551 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:32.551 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:32.551 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:32.552 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:32.552 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:32.552 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:32.552 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:32.552 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-09T20:29:32.553 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:32.572 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:32.572 INFO:teuthology.orchestra.run.vm03.stdout: python3-rbd* 2026-03-09T20:29:32.601 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:32.601 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:32.614 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 
2026-03-09T20:29:32.614 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:32.749 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:32.749 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-09T20:29:32.791 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:32.791 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:32.791 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:32.791 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:32.792 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:32.792 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:32.792 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:32.792 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:32.792 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:32.792 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:32.793 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:32.793 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:32.793 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:32.793 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:32.793 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:32.793 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:32.793 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:32.793 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:32.793 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip 2026-03-09T20:29:32.793 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:32.801 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 
90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-09T20:29:32.803 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:32.812 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:32.812 INFO:teuthology.orchestra.run.vm08.stdout: python3-rbd* 2026-03-09T20:29:32.853 INFO:teuthology.orchestra.run.vm04.stdout:Package 'python3-cephfs' is not installed, so not removed 2026-03-09T20:29:32.853 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:32.853 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:32.853 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:32.854 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip 2026-03-09T20:29:32.855 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:32.880 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:32.880 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-09T20:29:32.919 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:33.013 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:33.013 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-09T20:29:33.053 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-09T20:29:33.055 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:33.141 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:33.142 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:33.270 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:33.270 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:33.270 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:33.270 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:33.270 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:33.270 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:33.270 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:33.270 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:33.271 
INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip 2026-03-09T20:29:33.271 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:33.278 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:33.278 INFO:teuthology.orchestra.run.vm04.stdout: python3-rbd* 2026-03-09T20:29:33.453 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 1 to remove and 10 not upgraded. 2026-03-09T20:29:33.453 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 1186 kB disk space will be freed. 2026-03-09T20:29:33.493 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117410 files and directories currently installed.) 2026-03-09T20:29:33.495 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rbd (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:33.944 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:33.978 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:34.183 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:34.184 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:34.185 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:34.220 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 
2026-03-09T20:29:34.388 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:34.389 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:34.389 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:34.389 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-09T20:29:34.390 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:34.406 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:34.407 INFO:teuthology.orchestra.run.vm03.stdout: libcephfs-dev* libcephfs2* 2026-03-09T20:29:34.438 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:34.438 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:34.606 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T20:29:34.606 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-09T20:29:34.626 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:34.651 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 
5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-09T20:29:34.654 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:34.664 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:34.666 INFO:teuthology.orchestra.run.vm03.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:34.691 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:34.708 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:34.709 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:34.709 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:34.709 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet 
zip 2026-03-09T20:29:34.710 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:34.725 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:34.725 INFO:teuthology.orchestra.run.vm08.stdout: libcephfs-dev* libcephfs2* 2026-03-09T20:29:34.875 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:34.876 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:34.912 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T20:29:34.912 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-09T20:29:34.952 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-09T20:29:34.955 INFO:teuthology.orchestra.run.vm08.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:34.967 INFO:teuthology.orchestra.run.vm08.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:34.993 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 
2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:35.001 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:35.002 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:35.002 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:35.002 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:35.002 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip 2026-03-09T20:29:35.002 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:35.015 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:35.016 INFO:teuthology.orchestra.run.vm04.stdout: libcephfs-dev* libcephfs2* 2026-03-09T20:29:35.195 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 2 to remove and 10 not upgraded. 2026-03-09T20:29:35.195 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 3202 kB disk space will be freed. 2026-03-09T20:29:35.238 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 
70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117402 files and directories currently installed.) 2026-03-09T20:29:35.240 INFO:teuthology.orchestra.run.vm04.stdout:Removing libcephfs-dev (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:35.252 INFO:teuthology.orchestra.run.vm04.stdout:Removing libcephfs2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:35.277 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:35.828 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:35.870 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:36.080 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:36.081 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:36.105 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:36.142 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:36.283 INFO:teuthology.orchestra.run.vm03.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-09T20:29:36.283 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:36.283 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:36.284 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:36.284 INFO:teuthology.orchestra.run.vm03.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:36.284 INFO:teuthology.orchestra.run.vm03.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:36.284 INFO:teuthology.orchestra.run.vm03.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:36.284 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:36.284 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:36.284 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:36.284 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:36.284 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:36.285 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:36.285 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:36.285 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:36.285 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:36.285 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress 
python3-wcwidth 2026-03-09T20:29:36.285 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:36.285 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:36.285 INFO:teuthology.orchestra.run.vm03.stdout: xmlstarlet zip 2026-03-09T20:29:36.285 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:36.307 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:36.308 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:36.333 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:36.334 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:36.339 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:36.383 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:36.417 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:36.524 INFO:teuthology.orchestra.run.vm08.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-09T20:29:36.524 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl 
python3-waitress python3-wcwidth 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout: xmlstarlet zip 2026-03-09T20:29:36.525 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:36.539 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:36.539 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:36.550 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:36.551 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:36.571 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:36.631 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:36.631 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:36.752 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:36.752 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:36.752 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 
2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:36.753 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:36.769 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:36.770 INFO:teuthology.orchestra.run.vm03.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-09T20:29:36.770 INFO:teuthology.orchestra.run.vm03.stdout: qemu-block-extra* rbd-fuse* 2026-03-09T20:29:36.784 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:36.785 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:36.885 INFO:teuthology.orchestra.run.vm04.stdout:Package 'libcephfs-dev' is not installed, so not removed 2026-03-09T20:29:36.885 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:36.885 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:36.885 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libjq1 liblua5.3-dev liboath0 libonig5 libpmemobj1 2026-03-09T20:29:36.885 INFO:teuthology.orchestra.run.vm04.stdout: libradosstriper1 librdkafka1 libreadline-dev librgw2 libsgutils2-2 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: libsqlite3-mod-ceph lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: pkg-config python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket 
python3-webtest python3-werkzeug 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile sg3-utils sg3-utils-udev smartmontools socat unzip 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout: xmlstarlet zip 2026-03-09T20:29:36.886 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:36.912 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:36.912 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:36.945 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:36.961 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T20:29:36.961 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-09T20:29:37.005 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-09T20:29:37.006 INFO:teuthology.orchestra.run.vm03.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.015 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T20:29:37.018 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:37.018 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:37.018 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:37.018 INFO:teuthology.orchestra.run.vm08.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:37.018 INFO:teuthology.orchestra.run.vm08.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:37.019 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:37.026 INFO:teuthology.orchestra.run.vm03.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.035 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:37.035 INFO:teuthology.orchestra.run.vm08.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-09T20:29:37.035 INFO:teuthology.orchestra.run.vm08.stdout: qemu-block-extra* rbd-fuse* 2026-03-09T20:29:37.036 INFO:teuthology.orchestra.run.vm03.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 
2026-03-09T20:29:37.160 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:37.160 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:37.215 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T20:29:37.215 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-09T20:29:37.254 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-09T20:29:37.257 INFO:teuthology.orchestra.run.vm08.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.270 INFO:teuthology.orchestra.run.vm08.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.283 INFO:teuthology.orchestra.run.vm08.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.294 INFO:teuthology.orchestra.run.vm08.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T20:29:37.327 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:37.327 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:37.327 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:37.327 INFO:teuthology.orchestra.run.vm04.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:37.327 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:37.327 INFO:teuthology.orchestra.run.vm04.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:37.327 INFO:teuthology.orchestra.run.vm04.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:37.327 INFO:teuthology.orchestra.run.vm04.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:37.327 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:37.327 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:37.328 
INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:37.328 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:37.337 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:37.337 INFO:teuthology.orchestra.run.vm04.stdout: librados2* libradosstriper1* librbd1* librgw2* libsqlite3-mod-ceph* 2026-03-09T20:29:37.337 INFO:teuthology.orchestra.run.vm04.stdout: qemu-block-extra* rbd-fuse* 2026-03-09T20:29:37.436 INFO:teuthology.orchestra.run.vm03.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.449 INFO:teuthology.orchestra.run.vm03.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.462 INFO:teuthology.orchestra.run.vm03.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.490 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:37.513 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 7 to remove and 10 not upgraded. 2026-03-09T20:29:37.513 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 51.6 MB disk space will be freed. 2026-03-09T20:29:37.526 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:37.555 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117387 files and directories currently installed.) 2026-03-09T20:29:37.556 INFO:teuthology.orchestra.run.vm04.stdout:Removing rbd-fuse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.569 INFO:teuthology.orchestra.run.vm04.stdout:Removing libsqlite3-mod-ceph (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.580 INFO:teuthology.orchestra.run.vm04.stdout:Removing libradosstriper1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.591 INFO:teuthology.orchestra.run.vm04.stdout:Removing qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T20:29:37.606 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 
20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T20:29:37.609 INFO:teuthology.orchestra.run.vm03.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T20:29:37.702 INFO:teuthology.orchestra.run.vm08.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.716 INFO:teuthology.orchestra.run.vm08.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.734 INFO:teuthology.orchestra.run.vm08.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:37.763 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:37.808 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:37.881 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T20:29:37.883 INFO:teuthology.orchestra.run.vm08.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 2026-03-09T20:29:37.997 INFO:teuthology.orchestra.run.vm04.stdout:Removing librbd1 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:38.011 INFO:teuthology.orchestra.run.vm04.stdout:Removing librgw2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:38.024 INFO:teuthology.orchestra.run.vm04.stdout:Removing librados2 (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:38.049 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:38.083 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:38.156 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T20:29:38.159 INFO:teuthology.orchestra.run.vm04.stdout:Purging configuration files for qemu-block-extra (1:6.2+dfsg-2ubuntu6.28) ... 
2026-03-09T20:29:39.220 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:39.255 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:39.452 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:39.452 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:39.460 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:39.495 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout:Package 'librbd1' is not installed, so not removed 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:39.588 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:39.589 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:39.589 
INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:39.589 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:39.607 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:39.607 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:39.640 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:39.641 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:39.675 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:39.692 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:39.693 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:39.828 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:39.828 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:39.847 INFO:teuthology.orchestra.run.vm08.stdout:Package 'librbd1' is not installed, so not removed 2026-03-09T20:29:39.847 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:39.847 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:39.847 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:39.847 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 
2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:39.848 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:39.876 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:39.876 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:39.909 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 
2026-03-09T20:29:40.004 INFO:teuthology.orchestra.run.vm04.stdout:Package 'librbd1' is not installed, so not removed 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:40.005 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:40.022 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:40.023 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
2026-03-09T20:29:40.044 INFO:teuthology.orchestra.run.vm03.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-09T20:29:40.044 INFO:teuthology.orchestra.run.vm03.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:40.044 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:40.044 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:40.044 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:40.044 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:40.045 INFO:teuthology.orchestra.run.vm03.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:40.055 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:40.071 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:40.071 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
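Note on the repeated "W: --force-yes is deprecated, use one of the options starting with --allow instead" warnings above: since apt 1.1 the old catch-all flag has been split into narrower --allow-* options. As a hedged sketch only (not the command teuthology itself issues), an equivalent non-interactive removal of the packages seen in this log could look like:

    # hypothetical replacement for the deprecated --force-yes catch-all;
    # each --allow-* flag opts in to one specific risky behaviour instead of all of them
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
        --allow-downgrades --allow-remove-essential --allow-change-held-packages \
        -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" \
        remove libcephfs2 libcephfs-dev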
2026-03-09T20:29:40.073 DEBUG:teuthology.orchestra.run.vm03:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-09T20:29:40.118 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:40.118 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:40.132 DEBUG:teuthology.orchestra.run.vm03:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T20:29:40.208 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:40.266 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:40.267 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:40.319 INFO:teuthology.orchestra.run.vm08.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-09T20:29:40.319 INFO:teuthology.orchestra.run.vm08.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:40.319 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:40.320 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:40.320 INFO:teuthology.orchestra.run.vm08.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:40.320 INFO:teuthology.orchestra.run.vm08.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:40.320 INFO:teuthology.orchestra.run.vm08.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:40.320 INFO:teuthology.orchestra.run.vm08.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 
2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:40.321 INFO:teuthology.orchestra.run.vm08.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:40.348 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:40.349 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:40.351 DEBUG:teuthology.orchestra.run.vm08:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-09T20:29:40.407 DEBUG:teuthology.orchestra.run.vm08:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T20:29:40.413 INFO:teuthology.orchestra.run.vm04.stdout:Package 'rbd-fuse' is not installed, so not removed 2026-03-09T20:29:40.413 INFO:teuthology.orchestra.run.vm04.stdout:The following packages were automatically installed and are no longer required: 2026-03-09T20:29:40.413 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:40.413 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:40.413 INFO:teuthology.orchestra.run.vm04.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:40.413 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress 
python3-wcwidth 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:40.414 INFO:teuthology.orchestra.run.vm04.stdout:Use 'sudo apt autoremove' to remove them. 2026-03-09T20:29:40.418 INFO:teuthology.orchestra.run.vm03.stdout:Building dependency tree... 2026-03-09T20:29:40.418 INFO:teuthology.orchestra.run.vm03.stdout:Reading state information... 2026-03-09T20:29:40.441 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 0 to remove and 10 not upgraded. 2026-03-09T20:29:40.441 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:40.443 DEBUG:teuthology.orchestra.run.vm04:> dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq 2026-03-09T20:29:40.483 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:40.498 DEBUG:teuthology.orchestra.run.vm04:> sudo DEBIAN_FRONTEND=noninteractive apt-get -y --force-yes -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" autoremove 2026-03-09T20:29:40.578 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:40.642 INFO:teuthology.orchestra.run.vm03.stdout:The following packages will be REMOVED: 2026-03-09T20:29:40.642 INFO:teuthology.orchestra.run.vm03.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:40.642 INFO:teuthology.orchestra.run.vm03.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:40.643 INFO:teuthology.orchestra.run.vm03.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:40.643 INFO:teuthology.orchestra.run.vm03.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:40.643 INFO:teuthology.orchestra.run.vm03.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:40.643 INFO:teuthology.orchestra.run.vm03.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:40.644 
INFO:teuthology.orchestra.run.vm03.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:40.644 INFO:teuthology.orchestra.run.vm03.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:40.683 INFO:teuthology.orchestra.run.vm08.stdout:Building dependency tree... 2026-03-09T20:29:40.683 INFO:teuthology.orchestra.run.vm08.stdout:Reading state information... 2026-03-09T20:29:40.727 INFO:teuthology.orchestra.run.vm04.stdout:Building dependency tree... 2026-03-09T20:29:40.727 INFO:teuthology.orchestra.run.vm04.stdout:Reading state information... 2026-03-09T20:29:40.832 INFO:teuthology.orchestra.run.vm03.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T20:29:40.832 INFO:teuthology.orchestra.run.vm03.stdout:After this operation, 107 MB disk space will be freed. 2026-03-09T20:29:40.870 INFO:teuthology.orchestra.run.vm03.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T20:29:40.872 INFO:teuthology.orchestra.run.vm03.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 
2026-03-09T20:29:40.875 INFO:teuthology.orchestra.run.vm04.stdout:The following packages will be REMOVED: 2026-03-09T20:29:40.875 INFO:teuthology.orchestra.run.vm04.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:40.875 INFO:teuthology.orchestra.run.vm04.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:40.875 INFO:teuthology.orchestra.run.vm04.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:40.875 INFO:teuthology.orchestra.run.vm04.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:40.876 INFO:teuthology.orchestra.run.vm04.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:40.886 INFO:teuthology.orchestra.run.vm03.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 
2026-03-09T20:29:40.895 INFO:teuthology.orchestra.run.vm08.stdout:The following packages will be REMOVED: 2026-03-09T20:29:40.895 INFO:teuthology.orchestra.run.vm08.stdout: ceph-mgr-modules-core jq kpartx libboost-iostreams1.74.0 2026-03-09T20:29:40.895 INFO:teuthology.orchestra.run.vm08.stdout: libboost-thread1.74.0 libdouble-conversion3 libfuse2 libgfapi0 libgfrpc0 2026-03-09T20:29:40.895 INFO:teuthology.orchestra.run.vm08.stdout: libgfxdr0 libglusterfs0 libiscsi7 libjq1 liblttng-ust1 liblua5.3-dev libnbd0 2026-03-09T20:29:40.895 INFO:teuthology.orchestra.run.vm08.stdout: liboath0 libonig5 libpcre2-16-0 libpmemobj1 libqt5core5a libqt5dbus5 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: libqt5network5 librdkafka1 libreadline-dev libsgutils2-2 libthrift-0.16.0 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm03.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: lua-any lua-sec lua-socket lua5.1 luarocks nvme-cli pkg-config 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python-asyncssh-doc python-pastedeploy-tpl python3-asyncssh 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-cachetools python3-ceph-argparse python3-ceph-common python3-cheroot 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-cherrypy3 python3-google-auth python3-jaraco.classes 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-jaraco.collections python3-jaraco.functools python3-jaraco.text 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-joblib python3-kubernetes python3-logutils python3-mako 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-natsort python3-paste python3-pastedeploy python3-pastescript 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-pecan python3-portend python3-prettytable python3-psutil 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-pyinotify python3-repoze.lru python3-requests-oauthlib 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-routes python3-rsa python3-simplegeneric python3-simplejson 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-singledispatch python3-sklearn python3-sklearn-lib python3-tempita 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-tempora python3-threadpoolctl python3-waitress python3-wcwidth 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-webob python3-websocket python3-webtest python3-werkzeug 2026-03-09T20:29:40.896 INFO:teuthology.orchestra.run.vm08.stdout: python3-zc.lockfile qttranslations5-l10n sg3-utils sg3-utils-udev 2026-03-09T20:29:40.897 INFO:teuthology.orchestra.run.vm08.stdout: smartmontools socat unzip xmlstarlet zip 2026-03-09T20:29:40.905 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T20:29:40.915 INFO:teuthology.orchestra.run.vm03.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T20:29:40.926 INFO:teuthology.orchestra.run.vm03.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T20:29:40.937 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:29:40.948 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 
2026-03-09T20:29:40.959 INFO:teuthology.orchestra.run.vm03.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:29:40.979 INFO:teuthology.orchestra.run.vm03.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T20:29:40.990 INFO:teuthology.orchestra.run.vm03.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T20:29:41.001 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.010 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.018 INFO:teuthology.orchestra.run.vm03.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.028 INFO:teuthology.orchestra.run.vm03.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.036 INFO:teuthology.orchestra.run.vm03.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T20:29:41.039 INFO:teuthology.orchestra.run.vm04.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T20:29:41.039 INFO:teuthology.orchestra.run.vm04.stdout:After this operation, 107 MB disk space will be freed. 2026-03-09T20:29:41.045 INFO:teuthology.orchestra.run.vm03.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T20:29:41.054 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T20:29:41.065 INFO:teuthology.orchestra.run.vm03.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T20:29:41.074 INFO:teuthology.orchestra.run.vm08.stdout:0 upgraded, 0 newly installed, 87 to remove and 10 not upgraded. 2026-03-09T20:29:41.074 INFO:teuthology.orchestra.run.vm08.stdout:After this operation, 107 MB disk space will be freed. 2026-03-09T20:29:41.075 INFO:teuthology.orchestra.run.vm04.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T20:29:41.076 INFO:teuthology.orchestra.run.vm04.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:41.090 INFO:teuthology.orchestra.run.vm03.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T20:29:41.091 INFO:teuthology.orchestra.run.vm04.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T20:29:41.099 INFO:teuthology.orchestra.run.vm03.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T20:29:41.101 INFO:teuthology.orchestra.run.vm04.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T20:29:41.109 INFO:teuthology.orchestra.run.vm03.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T20:29:41.111 INFO:teuthology.orchestra.run.vm04.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T20:29:41.114 INFO:teuthology.orchestra.run.vm08.stdout:(Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 
45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 117336 files and directories currently installed.) 2026-03-09T20:29:41.117 INFO:teuthology.orchestra.run.vm08.stdout:Removing ceph-mgr-modules-core (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:41.118 INFO:teuthology.orchestra.run.vm03.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T20:29:41.120 INFO:teuthology.orchestra.run.vm04.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T20:29:41.128 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T20:29:41.131 INFO:teuthology.orchestra.run.vm04.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T20:29:41.134 INFO:teuthology.orchestra.run.vm08.stdout:Removing jq (1.6-2.1ubuntu3.1) ... 2026-03-09T20:29:41.137 INFO:teuthology.orchestra.run.vm03.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T20:29:41.142 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:29:41.146 INFO:teuthology.orchestra.run.vm03.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T20:29:41.148 INFO:teuthology.orchestra.run.vm08.stdout:Removing kpartx (0.8.8-1ubuntu1.22.04.4) ... 2026-03-09T20:29:41.153 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:29:41.156 INFO:teuthology.orchestra.run.vm03.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T20:29:41.160 INFO:teuthology.orchestra.run.vm08.stdout:Removing libboost-iostreams1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T20:29:41.164 INFO:teuthology.orchestra.run.vm04.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:29:41.167 INFO:teuthology.orchestra.run.vm03.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-09T20:29:41.172 INFO:teuthology.orchestra.run.vm08.stdout:Removing libboost-thread1.74.0:amd64 (1.74.0-14ubuntu3) ... 2026-03-09T20:29:41.175 INFO:teuthology.orchestra.run.vm03.stdout:update-initramfs: deferring update (trigger activated) 2026-03-09T20:29:41.183 INFO:teuthology.orchestra.run.vm04.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T20:29:41.183 INFO:teuthology.orchestra.run.vm03.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-09T20:29:41.184 INFO:teuthology.orchestra.run.vm08.stdout:Removing libthrift-0.16.0:amd64 (0.16.0-2) ... 2026-03-09T20:29:41.194 INFO:teuthology.orchestra.run.vm04.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T20:29:41.195 INFO:teuthology.orchestra.run.vm08.stdout:Removing libqt5network5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:29:41.201 INFO:teuthology.orchestra.run.vm03.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-09T20:29:41.205 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.207 INFO:teuthology.orchestra.run.vm08.stdout:Removing libqt5dbus5:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:29:41.212 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-any (27ubuntu1) ... 2026-03-09T20:29:41.215 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 
2026-03-09T20:29:41.218 INFO:teuthology.orchestra.run.vm08.stdout:Removing libqt5core5a:amd64 (5.15.3+dfsg-2ubuntu0.2) ... 2026-03-09T20:29:41.221 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T20:29:41.226 INFO:teuthology.orchestra.run.vm04.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.231 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T20:29:41.239 INFO:teuthology.orchestra.run.vm04.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.240 INFO:teuthology.orchestra.run.vm08.stdout:Removing libdouble-conversion3:amd64 (3.1.7-4) ... 2026-03-09T20:29:41.244 INFO:teuthology.orchestra.run.vm03.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T20:29:41.251 INFO:teuthology.orchestra.run.vm04.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T20:29:41.253 INFO:teuthology.orchestra.run.vm08.stdout:Removing libfuse2:amd64 (2.9.9-5ubuntu3) ... 2026-03-09T20:29:41.260 INFO:teuthology.orchestra.run.vm03.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T20:29:41.262 INFO:teuthology.orchestra.run.vm04.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T20:29:41.264 INFO:teuthology.orchestra.run.vm08.stdout:Removing libgfapi0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.272 INFO:teuthology.orchestra.run.vm04.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T20:29:41.275 INFO:teuthology.orchestra.run.vm08.stdout:Removing libgfrpc0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.283 INFO:teuthology.orchestra.run.vm04.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T20:29:41.288 INFO:teuthology.orchestra.run.vm08.stdout:Removing libgfxdr0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.300 INFO:teuthology.orchestra.run.vm08.stdout:Removing libglusterfs0:amd64 (10.1-1ubuntu0.2) ... 2026-03-09T20:29:41.307 INFO:teuthology.orchestra.run.vm04.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T20:29:41.312 INFO:teuthology.orchestra.run.vm08.stdout:Removing libiscsi7:amd64 (1.19.0-3build2) ... 2026-03-09T20:29:41.316 INFO:teuthology.orchestra.run.vm04.stdout:Removing libnbd0 (1.10.5-1) ... 2026-03-09T20:29:41.325 INFO:teuthology.orchestra.run.vm08.stdout:Removing libjq1:amd64 (1.6-2.1ubuntu3.1) ... 2026-03-09T20:29:41.326 INFO:teuthology.orchestra.run.vm04.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T20:29:41.335 INFO:teuthology.orchestra.run.vm04.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T20:29:41.337 INFO:teuthology.orchestra.run.vm08.stdout:Removing liblttng-ust1:amd64 (2.13.1-1ubuntu1) ... 2026-03-09T20:29:41.345 INFO:teuthology.orchestra.run.vm04.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T20:29:41.348 INFO:teuthology.orchestra.run.vm08.stdout:Removing luarocks (3.8.0+dfsg1-1) ... 2026-03-09T20:29:41.356 INFO:teuthology.orchestra.run.vm04.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T20:29:41.367 INFO:teuthology.orchestra.run.vm04.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T20:29:41.375 INFO:teuthology.orchestra.run.vm08.stdout:Removing liblua5.3-dev:amd64 (5.3.6-1build1) ... 2026-03-09T20:29:41.378 INFO:teuthology.orchestra.run.vm04.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T20:29:41.387 INFO:teuthology.orchestra.run.vm08.stdout:Removing libnbd0 (1.10.5-1) ... 
2026-03-09T20:29:41.388 INFO:teuthology.orchestra.run.vm04.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-09T20:29:41.395 INFO:teuthology.orchestra.run.vm04.stdout:update-initramfs: deferring update (trigger activated) 2026-03-09T20:29:41.399 INFO:teuthology.orchestra.run.vm08.stdout:Removing liboath0:amd64 (2.6.7-3ubuntu0.1) ... 2026-03-09T20:29:41.404 INFO:teuthology.orchestra.run.vm04.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-09T20:29:41.410 INFO:teuthology.orchestra.run.vm08.stdout:Removing libonig5:amd64 (6.9.7.1-2build1) ... 2026-03-09T20:29:41.420 INFO:teuthology.orchestra.run.vm08.stdout:Removing libpcre2-16-0:amd64 (10.39-3ubuntu0.1) ... 2026-03-09T20:29:41.421 INFO:teuthology.orchestra.run.vm04.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-09T20:29:41.432 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-any (27ubuntu1) ... 2026-03-09T20:29:41.432 INFO:teuthology.orchestra.run.vm08.stdout:Removing libpmemobj1:amd64 (1.11.1-3build1) ... 2026-03-09T20:29:41.444 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T20:29:41.444 INFO:teuthology.orchestra.run.vm08.stdout:Removing librdkafka1:amd64 (1.8.0-1build1) ... 2026-03-09T20:29:41.455 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T20:29:41.457 INFO:teuthology.orchestra.run.vm08.stdout:Removing libreadline-dev:amd64 (8.1.2-1) ... 2026-03-09T20:29:41.467 INFO:teuthology.orchestra.run.vm04.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T20:29:41.468 INFO:teuthology.orchestra.run.vm08.stdout:Removing sg3-utils-udev (1.46-1ubuntu0.22.04.1) ... 2026-03-09T20:29:41.476 INFO:teuthology.orchestra.run.vm08.stdout:update-initramfs: deferring update (trigger activated) 2026-03-09T20:29:41.484 INFO:teuthology.orchestra.run.vm04.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T20:29:41.489 INFO:teuthology.orchestra.run.vm08.stdout:Removing sg3-utils (1.46-1ubuntu0.22.04.1) ... 2026-03-09T20:29:41.510 INFO:teuthology.orchestra.run.vm08.stdout:Removing libsgutils2-2:amd64 (1.46-1ubuntu0.22.04.1) ... 2026-03-09T20:29:41.523 INFO:teuthology.orchestra.run.vm08.stdout:Removing lua-any (27ubuntu1) ... 2026-03-09T20:29:41.535 INFO:teuthology.orchestra.run.vm08.stdout:Removing lua-sec:amd64 (1.0.2-1) ... 2026-03-09T20:29:41.547 INFO:teuthology.orchestra.run.vm08.stdout:Removing lua-socket:amd64 (3.0~rc1+git+ac3201d-6) ... 2026-03-09T20:29:41.563 INFO:teuthology.orchestra.run.vm08.stdout:Removing lua5.1 (5.1.5-8.1build4) ... 2026-03-09T20:29:41.579 INFO:teuthology.orchestra.run.vm08.stdout:Removing nvme-cli (1.16-3ubuntu0.3) ... 2026-03-09T20:29:41.683 INFO:teuthology.orchestra.run.vm03.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T20:29:41.715 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T20:29:41.739 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T20:29:41.797 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-09T20:29:41.843 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-09T20:29:41.880 INFO:teuthology.orchestra.run.vm04.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T20:29:41.893 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pastedeploy (2.1.1-1) ... 
2026-03-09T20:29:41.912 INFO:teuthology.orchestra.run.vm04.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T20:29:41.936 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T20:29:41.941 INFO:teuthology.orchestra.run.vm03.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T20:29:41.946 INFO:teuthology.orchestra.run.vm08.stdout:Removing pkg-config (0.29.2-1ubuntu3) ... 2026-03-09T20:29:41.951 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T20:29:41.980 INFO:teuthology.orchestra.run.vm08.stdout:Removing python-asyncssh-doc (2.5.0-1ubuntu0.1) ... 2026-03-09T20:29:41.997 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-09T20:29:42.005 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-pecan (1.3.3-4ubuntu2) ... 2026-03-09T20:29:42.006 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T20:29:42.043 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-09T20:29:42.065 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-webtest (2.0.35-1) ... 2026-03-09T20:29:42.094 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-09T20:29:42.113 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-pastescript (2.0.2-4) ... 2026-03-09T20:29:42.142 INFO:teuthology.orchestra.run.vm04.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T20:29:42.152 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T20:29:42.173 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-pastedeploy (2.1.1-1) ... 2026-03-09T20:29:42.209 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T20:29:42.225 INFO:teuthology.orchestra.run.vm08.stdout:Removing python-pastedeploy-tpl (2.1.1-1) ... 2026-03-09T20:29:42.238 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-asyncssh (2.5.0-1ubuntu0.1) ... 2026-03-09T20:29:42.271 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-09T20:29:42.294 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-kubernetes (12.0.1-1ubuntu1) ... 2026-03-09T20:29:42.323 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-09T20:29:42.369 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:42.414 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:42.461 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-09T20:29:42.470 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-09T20:29:42.519 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T20:29:42.521 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-09T20:29:42.561 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-google-auth (1.5.1-3) ... 2026-03-09T20:29:42.570 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:42.571 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 
2026-03-09T20:29:42.616 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-cachetools (5.0.0-1) ... 2026-03-09T20:29:42.621 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-09T20:29:42.622 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:42.667 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-ceph-argparse (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:42.672 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-09T20:29:42.673 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-09T20:29:42.723 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-ceph-common (19.2.3-678-ge911bdeb-1jammy) ... 2026-03-09T20:29:42.724 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-09T20:29:42.734 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T20:29:42.775 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-09T20:29:42.780 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-cherrypy3 (18.6.1-4) ... 2026-03-09T20:29:42.785 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-09T20:29:42.829 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-09T20:29:42.832 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-09T20:29:42.844 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-cheroot (8.5.2+ds1-1ubuntu3.1) ... 2026-03-09T20:29:42.878 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T20:29:42.882 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-09T20:29:42.906 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-jaraco.collections (3.4.0-2) ... 2026-03-09T20:29:42.934 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-09T20:29:42.959 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-jaraco.classes (3.2.1-3) ... 2026-03-09T20:29:42.982 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-09T20:29:43.009 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T20:29:43.011 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-portend (3.0.0-1) ... 2026-03-09T20:29:43.030 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-09T20:29:43.061 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-tempora (4.1.2-1) ... 2026-03-09T20:29:43.073 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-09T20:29:43.078 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T20:29:43.112 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-jaraco.text (3.6.0-2) ... 2026-03-09T20:29:43.122 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T20:29:43.163 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-jaraco.functools (3.4.0-2) ... 2026-03-09T20:29:43.171 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-natsort (8.0.2-1) ... 
2026-03-09T20:29:43.202 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T20:29:43.213 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-sklearn (0.23.2-5ubuntu6) ... 2026-03-09T20:29:43.220 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T20:29:43.264 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-09T20:29:43.281 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-09T20:29:43.315 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T20:29:43.325 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-09T20:29:43.342 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-joblib (0.17.0-4ubuntu1) ... 2026-03-09T20:29:43.367 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-09T20:29:43.375 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-09T20:29:43.407 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-logutils (0.3.3-8) ... 2026-03-09T20:29:43.420 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T20:29:43.426 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T20:29:43.455 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-mako (1.1.3+ds1-2ubuntu0.1) ... 2026-03-09T20:29:43.481 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-09T20:29:43.486 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-09T20:29:43.507 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-natsort (8.0.2-1) ... 2026-03-09T20:29:43.532 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T20:29:43.540 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-09T20:29:43.559 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-paste (3.5.0+dfsg1-1) ... 2026-03-09T20:29:43.585 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-rsa (4.8-1) ... 2026-03-09T20:29:43.599 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-09T20:29:43.624 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-prettytable (2.5.0-2) ... 2026-03-09T20:29:43.639 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-09T20:29:43.654 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 2026-03-09T20:29:43.678 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-psutil (5.9.0-1build1) ... 2026-03-09T20:29:43.690 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-09T20:29:43.705 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-09T20:29:43.733 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-pyinotify (0.9.6-1.3) ... 2026-03-09T20:29:43.744 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-09T20:29:43.761 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T20:29:43.785 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-routes (2.5.1-1ubuntu1) ... 
2026-03-09T20:29:43.793 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T20:29:43.813 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-rsa (4.8-1) ... 2026-03-09T20:29:43.819 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T20:29:43.840 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-repoze.lru (0.7-2) ... 2026-03-09T20:29:43.866 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-09T20:29:43.868 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-09T20:29:43.893 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-requests-oauthlib (1.3.0+ds-0.1) ... 2026-03-09T20:29:43.914 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T20:29:43.918 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-09T20:29:43.945 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-rsa (4.8-1) ... 2026-03-09T20:29:43.961 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T20:29:43.974 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-09T20:29:44.000 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-simplegeneric (0.8.1-3) ... 2026-03-09T20:29:44.008 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T20:29:44.023 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T20:29:44.049 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T20:29:44.051 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-simplejson (3.17.6-1build1) ... 2026-03-09T20:29:44.056 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-09T20:29:44.097 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-09T20:29:44.106 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-singledispatch (3.4.0.3-3) ... 2026-03-09T20:29:44.108 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T20:29:44.140 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 2026-03-09T20:29:44.154 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-sklearn-lib:amd64 (0.23.2-5ubuntu6) ... 2026-03-09T20:29:44.161 INFO:teuthology.orchestra.run.vm03.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-09T20:29:44.180 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-tempita (0.5.2-6ubuntu1) ... 2026-03-09T20:29:44.189 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T20:29:44.204 INFO:teuthology.orchestra.run.vm03.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-09T20:29:44.225 INFO:teuthology.orchestra.run.vm03.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T20:29:44.230 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-threadpoolctl (3.1.0-1) ... 2026-03-09T20:29:44.238 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T20:29:44.276 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-waitress (1.4.4-1.1ubuntu1.1) ... 
2026-03-09T20:29:44.292 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-09T20:29:44.324 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-wcwidth (0.2.5+dfsg1-1) ... 2026-03-09T20:29:44.349 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T20:29:44.374 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-webob (1:1.8.6-1.1ubuntu0.1) ... 2026-03-09T20:29:44.401 INFO:teuthology.orchestra.run.vm04.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-09T20:29:44.428 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-websocket (1.2.3-1) ... 2026-03-09T20:29:44.453 INFO:teuthology.orchestra.run.vm04.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-09T20:29:44.475 INFO:teuthology.orchestra.run.vm04.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T20:29:44.480 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-werkzeug (2.0.2+dfsg1-1ubuntu0.22.04.3) ... 2026-03-09T20:29:44.532 INFO:teuthology.orchestra.run.vm08.stdout:Removing python3-zc.lockfile (2.0-1) ... 2026-03-09T20:29:44.580 INFO:teuthology.orchestra.run.vm08.stdout:Removing qttranslations5-l10n (5.15.3-1) ... 2026-03-09T20:29:44.601 INFO:teuthology.orchestra.run.vm08.stdout:Removing smartmontools (7.2-1ubuntu0.1) ... 2026-03-09T20:29:44.655 INFO:teuthology.orchestra.run.vm03.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-09T20:29:44.667 INFO:teuthology.orchestra.run.vm03.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-09T20:29:44.686 INFO:teuthology.orchestra.run.vm03.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-09T20:29:44.702 INFO:teuthology.orchestra.run.vm03.stdout:Removing zip (3.0-12build2) ... 2026-03-09T20:29:44.729 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:44.739 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:44.782 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T20:29:44.789 INFO:teuthology.orchestra.run.vm03.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-09T20:29:44.805 INFO:teuthology.orchestra.run.vm03.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-09T20:29:44.887 INFO:teuthology.orchestra.run.vm04.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 2026-03-09T20:29:44.899 INFO:teuthology.orchestra.run.vm04.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-09T20:29:44.918 INFO:teuthology.orchestra.run.vm04.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-09T20:29:44.935 INFO:teuthology.orchestra.run.vm04.stdout:Removing zip (3.0-12build2) ... 2026-03-09T20:29:44.960 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:44.971 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:45.019 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T20:29:45.027 INFO:teuthology.orchestra.run.vm04.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-09T20:29:45.038 INFO:teuthology.orchestra.run.vm08.stdout:Removing socat (1.7.4.1-3ubuntu4) ... 
2026-03-09T20:29:45.045 INFO:teuthology.orchestra.run.vm04.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-09T20:29:45.049 INFO:teuthology.orchestra.run.vm08.stdout:Removing unzip (6.0-26ubuntu3.2) ... 2026-03-09T20:29:45.071 INFO:teuthology.orchestra.run.vm08.stdout:Removing xmlstarlet (1.6.1-2.1) ... 2026-03-09T20:29:45.089 INFO:teuthology.orchestra.run.vm08.stdout:Removing zip (3.0-12build2) ... 2026-03-09T20:29:45.116 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for libc-bin (2.35-0ubuntu3.13) ... 2026-03-09T20:29:45.127 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for man-db (2.10.2-1) ... 2026-03-09T20:29:45.174 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for mailcap (3.70+nmu1ubuntu1) ... 2026-03-09T20:29:45.183 INFO:teuthology.orchestra.run.vm08.stdout:Processing triggers for initramfs-tools (0.140ubuntu13.5) ... 2026-03-09T20:29:45.201 INFO:teuthology.orchestra.run.vm08.stdout:update-initramfs: Generating /boot/initrd.img-5.15.0-1092-kvm 2026-03-09T20:29:46.372 INFO:teuthology.orchestra.run.vm03.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-09T20:29:46.372 INFO:teuthology.orchestra.run.vm03.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-09T20:29:46.577 INFO:teuthology.orchestra.run.vm04.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-09T20:29:46.578 INFO:teuthology.orchestra.run.vm04.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-09T20:29:46.788 INFO:teuthology.orchestra.run.vm08.stdout:W: mkconf: MD subsystem is not loaded, thus I cannot scan for arrays. 2026-03-09T20:29:46.788 INFO:teuthology.orchestra.run.vm08.stdout:W: mdadm: failed to auto-generate temporary mdadm.conf file. 2026-03-09T20:29:48.408 INFO:teuthology.orchestra.run.vm03.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:48.411 DEBUG:teuthology.parallel:result is None 2026-03-09T20:29:48.519 INFO:teuthology.orchestra.run.vm04.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 2026-03-09T20:29:48.522 DEBUG:teuthology.parallel:result is None 2026-03-09T20:29:48.899 INFO:teuthology.orchestra.run.vm08.stderr:W: --force-yes is deprecated, use one of the options starting with --allow instead. 
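Note (not part of the captured output): the package cleanup above runs the same two-step sequence on every node. A commented restatement for readability; the reading of the dpkg status pattern is an interpretation, not something the log states, and --force-yes is omitted here because the nodes' own stderr warns it is deprecated in favour of the --allow-* options:

    # Purge any package left in a broken state: dpkg -l rows whose status
    # letter is U (unpacked) or H (half-installed) and whose error flag is
    # R (reinstall required).
    dpkg -l | grep '^.\(U\|H\)R' | awk '{print $2}' \
        | sudo xargs --no-run-if-empty dpkg -P --force-remove-reinstreq

    # Then drop everything that was only installed as a dependency of the
    # just-removed ceph packages (jq, kpartx, the python3-* helpers, etc.).
    sudo DEBIAN_FRONTEND=noninteractive apt-get -y \
        -o Dpkg::Options::="--force-confdef" \
        -o Dpkg::Options::="--force-confold" autoremove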
2026-03-09T20:29:48.902 DEBUG:teuthology.parallel:result is None 2026-03-09T20:29:48.902 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm03.local 2026-03-09T20:29:48.902 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm04.local 2026-03-09T20:29:48.902 INFO:teuthology.task.install:Removing ceph sources lists on ubuntu@vm08.local 2026-03-09T20:29:48.902 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-09T20:29:48.902 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-09T20:29:48.902 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f /etc/apt/sources.list.d/ceph.list 2026-03-09T20:29:48.911 DEBUG:teuthology.orchestra.run.vm03:> sudo apt-get update 2026-03-09T20:29:48.911 DEBUG:teuthology.orchestra.run.vm04:> sudo apt-get update 2026-03-09T20:29:48.953 DEBUG:teuthology.orchestra.run.vm08:> sudo apt-get update 2026-03-09T20:29:49.186 INFO:teuthology.orchestra.run.vm03.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T20:29:49.187 INFO:teuthology.orchestra.run.vm04.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T20:29:49.187 INFO:teuthology.orchestra.run.vm08.stdout:Hit:1 https://archive.ubuntu.com/ubuntu jammy InRelease 2026-03-09T20:29:49.190 INFO:teuthology.orchestra.run.vm03.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T20:29:49.192 INFO:teuthology.orchestra.run.vm08.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T20:29:49.192 INFO:teuthology.orchestra.run.vm04.stdout:Hit:2 https://archive.ubuntu.com/ubuntu jammy-updates InRelease 2026-03-09T20:29:49.198 INFO:teuthology.orchestra.run.vm03.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T20:29:49.199 INFO:teuthology.orchestra.run.vm04.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T20:29:49.200 INFO:teuthology.orchestra.run.vm08.stdout:Hit:3 https://archive.ubuntu.com/ubuntu jammy-backports InRelease 2026-03-09T20:29:49.210 INFO:teuthology.orchestra.run.vm04.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T20:29:49.218 INFO:teuthology.orchestra.run.vm03.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T20:29:49.515 INFO:teuthology.orchestra.run.vm08.stdout:Hit:4 https://security.ubuntu.com/ubuntu jammy-security InRelease 2026-03-09T20:29:50.188 INFO:teuthology.orchestra.run.vm03.stdout:Reading package lists... 2026-03-09T20:29:50.201 DEBUG:teuthology.parallel:result is None 2026-03-09T20:29:50.207 INFO:teuthology.orchestra.run.vm04.stdout:Reading package lists... 2026-03-09T20:29:50.219 DEBUG:teuthology.parallel:result is None 2026-03-09T20:29:50.353 INFO:teuthology.orchestra.run.vm08.stdout:Reading package lists... 2026-03-09T20:29:50.368 DEBUG:teuthology.parallel:result is None 2026-03-09T20:29:50.368 DEBUG:teuthology.run_tasks:Unwinding manager clock 2026-03-09T20:29:50.370 INFO:teuthology.task.clock:Checking final clock skew... 
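Note (not part of the captured output): the "Checking final clock skew..." step that follows issues one command per node. Restated with comments; the described behaviour is inferred from the command itself:

    # Ask whichever time daemon is present for its peer/source table:
    # try ntpd first, fall back to chrony, and never fail the teardown
    # if neither tool is available.
    PATH=/usr/bin:/usr/sbin ntpq -p \
        || PATH=/usr/bin:/usr/sbin chronyc sources \
        || true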
2026-03-09T20:29:50.370 DEBUG:teuthology.orchestra.run.vm03:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T20:29:50.372 DEBUG:teuthology.orchestra.run.vm04:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T20:29:50.373 DEBUG:teuthology.orchestra.run.vm08:> PATH=/usr/bin:/usr/sbin ntpq -p || PATH=/usr/bin:/usr/sbin chronyc sources || true 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:============================================================================== 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:+v22025082392863 129.69.253.1 2 u 54 64 377 28.234 +0.244 3.505 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:+mail.anyvm.tech 66.249.115.192 3 u 45 64 377 23.572 -2.421 2.742 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:#home.of.the.smi .LIgp. 1 u 45 64 377 41.332 +3.074 1.827 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:#158.101.188.125 189.97.54.122 2 u 55 64 377 21.031 +2.226 3.529 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:+ntp2.adminforge 131.188.3.220 2 u 51 64 377 25.031 -0.394 1.865 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:+cp.hypermediaa. 189.97.54.122 2 u 48 64 377 25.050 -1.471 2.269 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:*ntp0.rrze.uni-e .GPS. 1 u 40 64 377 26.249 -2.935 2.368 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:+vps-fra1.orlean 195.145.119.188 2 u 48 64 377 22.000 -1.424 5.164 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:+server1a.meinbe 124.216.164.14 2 u 49 64 377 24.998 -0.098 1.897 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:+185.252.140.126 218.73.139.35 2 u 49 64 377 25.105 -0.170 1.920 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:-185.125.190.57 194.121.207.249 2 u 64 64 377 35.317 -3.037 2.470 2026-03-09T20:29:50.505 INFO:teuthology.orchestra.run.vm04.stdout:+141.144.246.224 146.131.121.246 2 u 38 64 377 29.151 -0.810 3.246 2026-03-09T20:29:50.516 INFO:teuthology.orchestra.run.vm03.stdout: remote refid st t when poll reach delay offset jitter 2026-03-09T20:29:50.516 INFO:teuthology.orchestra.run.vm03.stdout:============================================================================== 2026-03-09T20:29:50.516 INFO:teuthology.orchestra.run.vm03.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T20:29:50.516 INFO:teuthology.orchestra.run.vm03.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000 2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout: 2.ubuntu.pool.n .POOL. 
16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout:-158.101.188.125 189.97.54.122 2 u 50 64 377 21.014 -0.132 0.205
2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout:+mail.anyvm.tech 66.249.115.192 3 u 51 64 377 23.491 +0.090 0.177
2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout:-185.252.140.126 218.73.139.35 2 u 52 64 377 25.074 +0.655 0.210
2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout:-47.ip-51-75-67. 225.254.30.190 4 u 53 64 377 21.181 +1.679 0.156
2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout:-vps-fra1.orlean 195.145.119.188 2 u 48 64 377 21.965 +0.461 4.375
2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout:-server1a.meinbe 124.216.164.14 2 u 48 64 377 25.023 +0.302 0.330
2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout:+adenin.s2p.de 31.209.85.242 2 u 45 64 377 24.980 +0.162 0.195
2026-03-09T20:29:50.517 INFO:teuthology.orchestra.run.vm03.stdout:*141.144.246.224 146.131.121.246 2 u 43 64 377 29.237 +0.187 0.970
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout: remote refid st t when poll reach delay offset jitter
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:==============================================================================
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout: 0.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout: 1.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout: 2.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout: 3.ubuntu.pool.n .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout: ntp.ubuntu.com .POOL. 16 p - 64 0 0.000 +0.000 0.000
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:-v22025082392863 129.69.253.1 2 u 47 64 377 28.234 -0.597 2.127
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:+home.of.the.smi .BBgp. 1 u 53 64 377 38.075 +0.772 2.414
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:-185.252.140.126 218.73.139.35 2 u 44 64 377 25.122 +2.257 1.919
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:+vps-fra1.orlean 195.145.119.188 2 u 55 64 377 22.048 +1.579 1.854
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:*158.101.188.125 189.97.54.122 2 u 44 64 377 21.010 +0.915 1.634
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:-141.144.246.224 146.131.121.246 2 u 47 64 377 29.143 +2.007 4.261
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:-server1a.meinbe 124.216.164.14 2 u 48 64 377 24.969 +1.708 2.061
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:-185.125.190.56 79.243.60.50 2 u 3 64 377 35.357 -2.913 3.135
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:-cp.hypermediaa. 189.97.54.122 2 u 49 64 377 25.067 +0.305 1.430
2026-03-09T20:29:50.579 INFO:teuthology.orchestra.run.vm08.stdout:+185.125.190.57 194.121.207.249 2 u - 64 377 35.285 -0.107 1.287
2026-03-09T20:29:50.579 DEBUG:teuthology.run_tasks:Unwinding manager ansible.cephlab
2026-03-09T20:29:50.581 INFO:teuthology.task.ansible:Skipping ansible cleanup...
2026-03-09T20:29:50.581 DEBUG:teuthology.run_tasks:Unwinding manager selinux
2026-03-09T20:29:50.583 DEBUG:teuthology.run_tasks:Unwinding manager pcp
2026-03-09T20:29:50.585 DEBUG:teuthology.run_tasks:Unwinding manager internal.timer
2026-03-09T20:29:50.587 INFO:teuthology.task.internal:Duration was 955.631347 seconds
2026-03-09T20:29:50.587 DEBUG:teuthology.run_tasks:Unwinding manager internal.syslog
2026-03-09T20:29:50.589 INFO:teuthology.task.internal.syslog:Shutting down syslog monitoring...
2026-03-09T20:29:50.589 DEBUG:teuthology.orchestra.run.vm03:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T20:29:50.590 DEBUG:teuthology.orchestra.run.vm04:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T20:29:50.591 DEBUG:teuthology.orchestra.run.vm08:> sudo rm -f -- /etc/rsyslog.d/80-cephtest.conf && sudo service rsyslog restart
2026-03-09T20:29:50.615 INFO:teuthology.task.internal.syslog:Checking logs for errors...
2026-03-09T20:29:50.616 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm03.local
2026-03-09T20:29:50.616 DEBUG:teuthology.orchestra.run.vm03:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T20:29:50.670 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm04.local
2026-03-09T20:29:50.670 DEBUG:teuthology.orchestra.run.vm04:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T20:29:50.683 DEBUG:teuthology.task.internal.syslog:Checking ubuntu@vm08.local
2026-03-09T20:29:50.684 DEBUG:teuthology.orchestra.run.vm08:> grep -E --binary-files=text '\bBUG\b|\bINFO\b|\bDEADLOCK\b' /home/ubuntu/cephtest/archive/syslog/kern.log | grep -v 'task .* blocked for more than .* seconds' | grep -v 'lockdep is turned off' | grep -v 'trying to register non-static key' | grep -v 'DEBUG: fsize' | grep -v CRON | grep -v 'BUG: bad unlock balance detected' | grep -v 'inconsistent lock state' | grep -v '*** DEADLOCK ***' | grep -v 'INFO: possible irq lock inversion dependency detected' | grep -v 'INFO: NMI handler (perf_event_nmi_handler) took too long to run' | grep -v 'INFO: recovery required on readonly' | grep -v 'ceph-create-keys: INFO' | grep -v INFO:ceph-create-keys | grep -v 'Loaded datasource DataSourceOpenStack' | grep -v 'container-storage-setup: INFO: Volume group backing root filesystem could not be determined' | grep -E -v '\bsalt-master\b|\bsalt-minion\b|\bsalt-api\b' | grep -v ceph-crash | grep -E -v '\btcmu-runner\b.*\bINFO\b' | head -n 1
2026-03-09T20:29:50.696 INFO:teuthology.task.internal.syslog:Gathering journactl...
2026-03-09T20:29:50.696 DEBUG:teuthology.orchestra.run.vm03:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T20:29:50.714 DEBUG:teuthology.orchestra.run.vm04:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T20:29:50.726 DEBUG:teuthology.orchestra.run.vm08:> sudo journalctl > /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T20:29:50.778 INFO:teuthology.task.internal.syslog:Compressing syslogs...
2026-03-09T20:29:50.778 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T20:29:50.779 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T20:29:50.786 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T20:29:50.786 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T20:29:50.786 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T20:29:50.786 INFO:teuthology.orchestra.run.vm03.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T20:29:50.787 INFO:teuthology.orchestra.run.vm03.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T20:29:50.795 INFO:teuthology.orchestra.run.vm03.stderr: 89.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T20:29:50.806 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest/archive/syslog -name '*.log' -print0 | sudo xargs -0 --max-args=1 --max-procs=0 --verbose --no-run-if-empty -- gzip -5 --verbose --
2026-03-09T20:29:50.812 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T20:29:50.812 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T20:29:50.812 INFO:teuthology.orchestra.run.vm04.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T20:29:50.812 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: /home/ubuntu/cephtest/archive/syslog/kern.log: 0.0% 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T20:29:50.812 INFO:teuthology.orchestra.run.vm04.stderr: -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T20:29:50.820 INFO:teuthology.orchestra.run.vm04.stderr:/home/ubuntu/cephtest/archive/syslog/journalctl.log: 89.4% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T20:29:50.829 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/misc.log
2026-03-09T20:29:50.830 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/kern.log
2026-03-09T20:29:50.830 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/misc.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/misc.log.gz
2026-03-09T20:29:50.830 INFO:teuthology.orchestra.run.vm08.stderr:gzip -5 --verbose -- /home/ubuntu/cephtest/archive/syslog/journalctl.log
2026-03-09T20:29:50.830 INFO:teuthology.orchestra.run.vm08.stderr:/home/ubuntu/cephtest/archive/syslog/kern.log: /home/ubuntu/cephtest/archive/syslog/journalctl.log: 0.0% -- replaced with /home/ubuntu/cephtest/archive/syslog/kern.log.gz
2026-03-09T20:29:50.836 INFO:teuthology.orchestra.run.vm08.stderr: 89.1% -- replaced with /home/ubuntu/cephtest/archive/syslog/journalctl.log.gz
2026-03-09T20:29:50.838 DEBUG:teuthology.run_tasks:Unwinding manager internal.sudo
2026-03-09T20:29:50.840 INFO:teuthology.task.internal:Restoring /etc/sudoers...
2026-03-09T20:29:50.840 DEBUG:teuthology.orchestra.run.vm03:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T20:29:50.849 DEBUG:teuthology.orchestra.run.vm04:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T20:29:50.869 DEBUG:teuthology.orchestra.run.vm08:> sudo mv -f /etc/sudoers.orig.teuthology /etc/sudoers
2026-03-09T20:29:50.891 DEBUG:teuthology.run_tasks:Unwinding manager internal.coredump
2026-03-09T20:29:50.893 DEBUG:teuthology.orchestra.run.vm03:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T20:29:50.894 DEBUG:teuthology.orchestra.run.vm04:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T20:29:50.900 INFO:teuthology.orchestra.run.vm03.stdout:kernel.core_pattern = core
2026-03-09T20:29:50.911 DEBUG:teuthology.orchestra.run.vm08:> sudo sysctl -w kernel.core_pattern=core && sudo bash -c 'for f in `find /home/ubuntu/cephtest/archive/coredump -type f`; do file $f | grep -q systemd-sysusers && rm $f || true ; done' && rmdir --ignore-fail-on-non-empty -- /home/ubuntu/cephtest/archive/coredump
2026-03-09T20:29:50.918 INFO:teuthology.orchestra.run.vm04.stdout:kernel.core_pattern = core
2026-03-09T20:29:50.941 INFO:teuthology.orchestra.run.vm08.stdout:kernel.core_pattern = core
2026-03-09T20:29:50.950 DEBUG:teuthology.orchestra.run.vm03:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T20:29:50.953 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:29:50.953 DEBUG:teuthology.orchestra.run.vm04:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T20:29:50.973 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:29:50.973 DEBUG:teuthology.orchestra.run.vm08:> test -e /home/ubuntu/cephtest/archive/coredump
2026-03-09T20:29:50.996 DEBUG:teuthology.orchestra.run:got remote process result: 1
2026-03-09T20:29:50.996 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive
2026-03-09T20:29:50.998 INFO:teuthology.task.internal:Transferring archived files...
2026-03-09T20:29:50.998 DEBUG:teuthology.misc:Transferring archived files from vm03:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641/remote/vm03
2026-03-09T20:29:50.998 DEBUG:teuthology.orchestra.run.vm03:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T20:29:51.008 DEBUG:teuthology.misc:Transferring archived files from vm04:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641/remote/vm04
2026-03-09T20:29:51.008 DEBUG:teuthology.orchestra.run.vm04:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T20:29:51.021 DEBUG:teuthology.misc:Transferring archived files from vm08:/home/ubuntu/cephtest/archive to /archive/kyr-2026-03-09_11:23:05-orch-squid-none-default-vps/641/remote/vm08
2026-03-09T20:29:51.022 DEBUG:teuthology.orchestra.run.vm08:> sudo tar c -f - -C /home/ubuntu/cephtest/archive -- .
2026-03-09T20:29:51.045 INFO:teuthology.task.internal:Removing archive directory...
2026-03-09T20:29:51.046 DEBUG:teuthology.orchestra.run.vm03:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T20:29:51.050 DEBUG:teuthology.orchestra.run.vm04:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T20:29:51.067 DEBUG:teuthology.orchestra.run.vm08:> rm -rf -- /home/ubuntu/cephtest/archive
2026-03-09T20:29:51.093 DEBUG:teuthology.run_tasks:Unwinding manager internal.archive_upload
2026-03-09T20:29:51.095 INFO:teuthology.task.internal:Not uploading archives.
2026-03-09T20:29:51.096 DEBUG:teuthology.run_tasks:Unwinding manager internal.base
2026-03-09T20:29:51.123 INFO:teuthology.task.internal:Tidying up after the test...
2026-03-09T20:29:51.123 DEBUG:teuthology.orchestra.run.vm03:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T20:29:51.124 DEBUG:teuthology.orchestra.run.vm04:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T20:29:51.125 DEBUG:teuthology.orchestra.run.vm08:> find /home/ubuntu/cephtest -ls ; rmdir -- /home/ubuntu/cephtest
2026-03-09T20:29:51.125 INFO:teuthology.orchestra.run.vm03.stdout: 258076 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 20:29 /home/ubuntu/cephtest
2026-03-09T20:29:51.127 INFO:teuthology.orchestra.run.vm04.stdout: 258077 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 20:29 /home/ubuntu/cephtest
2026-03-09T20:29:51.136 INFO:teuthology.orchestra.run.vm08.stdout: 258079 4 drwxr-xr-x 2 ubuntu ubuntu 4096 Mar 9 20:29 /home/ubuntu/cephtest
2026-03-09T20:29:51.137 DEBUG:teuthology.run_tasks:Unwinding manager console_log
2026-03-09T20:29:51.142 INFO:teuthology.run:Summary data:
description: orch/cephadm/workunits/{0-distro/ubuntu_22.04 agent/off mon_election/classic task/test_monitoring_stack_basic}
duration: 955.6313469409943
flavor: default
owner: kyr
success: true
2026-03-09T20:29:51.142 DEBUG:teuthology.report:Pushing job info to http://localhost:8080
2026-03-09T20:29:51.191 INFO:teuthology.run:pass